[comp.unix.admin] Unix security additions

PLS@cup.portal.com (Paul L Schauble) (03/08/91)

When unix was first developed, the system gave only minimal attention to
security issues. Lately, this has been a hot topic and a lot of work has
been done to improve Unix security.

I'm curious: What do you think are the five most significant changes or 
additions that have been made to Unix to improve its security?

     ++PLS

drake@drake.almaden.ibm.com (03/10/91)

I don't know about "unix" in general ... looking at AIX V3 in particular,
I suspect they are:

o  Access Control Lists (ACLs) on individual files.
o  Getting the passwords where they can't be publically read
o  Telling me when I log on when the last time I logged on was,
   and how many times someone has tried to log onto my account
   with an invalid password since I last logged on.
o  Eliminating setuid shell scripts
o  Providing alternatives to NFS with better security characteristics
   (AFS's Kerberos-enabled security, for example).


Sam Drake / IBM Almaden Research Center 
Internet:  drake@ibm.com            BITNET:  DRAKE at ALMADEN
Usenet:    ...!uunet!ibmarc!drake   Phone:   (408) 927-1861

jfh@rpp386.cactus.org (John F Haugh II) (03/11/91)

In article <565@rufus.UUCP> drake@drake.almaden.ibm.com writes:
>I don't know about "unix" in general ... looking at AIX V3 in particular,
>I suspect they are:

Regrettably, most of what you mention here was done first either by
someone else, or done a long time ago.  Worse, most of the vendors
involved in the activities you describe below can't agree on how to
do it in the first place.

>o  Access Control Lists (ACLs) on individual files.

Multics comes to mind ...

>o  Getting the passwords where they can't be publically read

This was done for AIX v2, but has also been done with SVR3.2 and
BSD.  No one has solved certain problems with transparency - that
is, making shadowed passwords look and feel like old-style
publically readable passwords.  This means all the programs that
used to think pw_passwd was valid are now wrong ;-(.  Making matters
worse, AT&T, BSD, and IBM have all failed to converge on a single
mechanism (and AT&T fails to agree on a single file format across
their various releases).  So you have a non-standard,
non-transparent feature ...

>o  Telling me when I log on when the last time I logged on was,
>   and how many times someone has tried to log onto my account
>   with an invalid password since I last logged on.

This has been a VAX/VMS feature for quite a while, and has been
available in public domain UNIX login systems for several years.
One neat thing IBM has added is event auditing, so password
failures can be monitored and handled in real time.  On the bad
side, they don't use syslog(), so BSD people are left out in the cold.

>o  Eliminating setuid shell scripts

IBM has yet to actually do this, although BSD has recommended you
don't use the feature and AT&T has allegedly fixed the holes and put
them back.  It is still possible in AIX v3 to exploit the same old
security holes in setuid shell scripts that existed years ago in
BSD setuid shell scripts.

The next four security features to be added will be doing the above
four correctly and in a manner which the entire industry can agree
upon.  There is nothing worse than a feature that is useless because
it acts different ways on different platforms.
-- 
John F. Haugh II        | Distribution to  | UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832 | GEnie PROHIBITED :-) |  Domain: jfh@rpp386.cactus.org
"I've never written a device driver, but I have written a device driver manual"
                -- Robert Hartman, IDE Corp.

rcpieter@wsinfo11.info.win.tue.nl (Tiggr) (03/11/91)

PLS@cup.portal.com (Paul L Schauble) writes:

>I'm curious: What do you think are the five most significant changes or 
>additions that have been made to Unix to improve its security?

Which brings up the question of the largest still-existing security
leak:  Why does UNIX still trust the network (ethernet in most cases)
it is attached to?  Nothing is simpler than plugging a PC into an
ethernet (for instance, a PC at a publicly accessible place) and
watching the packets go by.  Five minutes of waiting brings you a lot
of passwords.  When will internet packets start being encrypted?

Just an itchy feeling...
Tiggr

craig@bacchus.esa.oz.au (Craig Macbride) (03/13/91)

In <565@rufus.UUCP> drake@drake.almaden.ibm.com writes:

>I don't know about "unix" in general ... looking at AIX V3 in particular,
>I suspect they are:

>o  Access Control Lists (ACLs) on individual files.
>o  Getting the passwords where they can't be publically read

These are both designed to be non-standard and break other people's software.
I'd call them good if they didn't do that.

>o  Telling me when I log on when the last time I logged on was,
>   and how many times someone has tried to log onto my account
>   with an invalid password since I last logged on.

The only really good one of the lot. It can (and should) be implemented and
doesn't cause problems.

>o  Eliminating setuid shell scripts

A good idea in theory, but the security of the system is still largely a
matter of how it's administered. Why shouldn't people who want to use setuid
shell scripts be allowed to? Because IBM or AT&T says so? I don't really
think that's a good enough reason. Vendors shouldn't provide setuid shell
scripts in their distribution, but there is no reason why people should not
be able to use them. This is like censorship in concept: If people think
using setuid scripts is a bad idea (which it usually is), they don't have to
use them. If every construct in C which has the possibility of being abused
had been removed from the language, there wouldn't be a whole lot left.

>o  Providing alternatives to NFS with better security characteristics

Another excuse to make yet another non-standard piece of software. But then,
who really believes that AIX is Unix? :-)

-- 
 _____________________________________________________________________________
| Craig Macbride, craig@bacchus.esa.oz.au      | Hardware:                    |
|                                              |      The parts of a computer |
|   Expert Solutions Australia                 |        which you can kick!   | 

tchrist@convex.COM (Tom Christiansen) (03/14/91)

From the keyboard of craig@bacchus.esa.oz.au (Craig Macbride):
:>o  Eliminating setuid shell scripts
:
:A good idea in theory, but the security of the system is still largely a
:matter of how it's administered. Why shouldn't people who want to use setuid
:shell scripts be allowed to? Because IBM or AT&T says so? 

Do you think they say so just for the sake of pervicacity?  There are
two very good reasons for not running suid scripts.

The first is that there is a well-known and almost never-fixed race
condition in the kernel by which the mere presence of a setuid root
script on your system will allow anyone to become root who can make a
link to that file.  This has been known for many years, but almost no
vendor fixes it.  At most, if you're very lucky, they disable it.  You
should complain bitterly to your vendor if they've done neither, as
they are being negligent.  Most are.

You might like to know that Maarten Litmaath's indir program can be
used to circumvent this bug.

The second reason for never using setuid shell scripts is that the
amount of effort you have to go through to guarantee their security
even once the aforementioned bug is fixed is truly exhaustive.  Merely
wrapping the shell script with a setuid C wrapper does nothing to deal
with all these problems.  I have appended a posting detailing these.

Two more quick points before that, though.  First, the perl language had
as one of its design goals the ability to write secure programs in it.  A
suid program written in perl is often safer than a C program, because the
run-time system catches brain-dead errors that neither shell scripts nor C
programs check for.  Check out the section on "Setuid Scripts" in the perl
man page or the perl book for details.

The other thing is that I have a brief program that I believe
gives you the ability to grant someone edit rights on a file from a
suid program without giving away the whole farm.  It does this by
putting the file in a chrooted directory while it's being edited.  I'll
be happy to mail this out to anyone who asks, or to post it if there's
sufficient interest.

--tom



------- Forwarded Message

Date:         10 Aug 90 19:41:32 GMT
From:         vlb@magic.apple.com (Vicki Brown)
Subject:      Re: Suid script security
Organization: Apple Computer
Newsgroups:   comp.unix.questions


In article <14920003@hpdmd48.boi.hp.com> markw@hpdmd48.boi.hp.com (Mark
Wolfe) writes:
>
>    I know that suid scripts are a bad idea from reading comp.questions and
>comp.wizards over the last year or so. It seems that just about every guru
>in the world has posted a warning NOT to do it, so I decided I would follow
>the advice (it's a rare subject that all guru's agree on). However, it appears
>that I'm now about to have one of these ugly animals forced on me from above,
>so I'd like some advice:
>
> 1)  Just what are the security risks involved? (i.e. how would someone attack
>     a system via one of these).
>
> 2)  What can I do to make this as secure as possible?

Warning - very long response ahead.  Proceed at your own risk.

There was a very interesting paper in the USENIX Association's
publication, ;login: ("How To Write a Setuid Program", Matt Bishop,
;login: Vol 12, Number 1, January/February 1987).  An excerpt:

    Some versions of UNIX allow command scripts, such as shell scripts,
    to be made setuid ... Unfortunately, given the power and complexity
    of many command interpreters, it is often possible to force them to
    perform actions which were not intended, and which allow the user
    to violate system security.  This leaves the owner of the setuid
    script open to a devastating attack.  In general, such scripts
    should be avoided.

    ... suppose a site has a setuid script of sh commands.  An attacker
    simply executes the script in such a way that the shell ... appears
    to have been invoked by a person logging in.  UNIX applies the
    setuid bit on the script to the shell, and ... it becomes
    interactive...

    One way to avoid having a setuid script is to turn off the setuid
    bit on the script, and ... use a setuid [binary] program to invoke
    it.  This program should take care to call the command interpreter
    by its full path name, and reset environment information such as
    file descriptors and environment variables to a known state.
    However, this method should be used only as a last resort and as a
    temporary measure, since with many command interpreters it is
    possible even under these conditions to force them to take
    undesirable action.

The biggest problem with shell scripts is that you (the programmer /
administrator) have very little control over the programs which run
within the script.  As a very real example, I ran across a script which
allowed users to enter bug reports, using the "vi" editor.  The script
was setuid root, because it wanted to save files in funny places.  The
programmer had guarded against shell escapes (a known feature of vi),
by making this script the login shell.  However, he couldn't guard
against another feature:
	:e /etc/passwd

You can attempt to make your script as safe as possible by
	1) being very restrictive in your choice of UID.  That is,
	   make the script setuid for a non-privileged user, rather
	   than root (for example, if it must write a log file, could
	   the log file live in some locked area, accessed only by a
	   new and otherwise non-privileged account?)
	2) making the script setgid rather than setuid, with a very
	   restricted GID (see #1)
	3) ensuring that the script is short, very simple, and does not
	   make use of commands such as `vi', `mail' or anything
	   interactive.  setuid programs should do ONE thing only, and
	   in a non-complex manner.
	4) setting the PATH, IFS, and other environment variables
	   explicitly within the script
	5) locking down the permissions on the script.  If possible
	   allow it to be run only by group members.  Never allow write
	   permission.
	6) If your version of UNIX permits, take away read permission
	   for anyone other than the owner.  It's a bit harder to break
	   something if you can't see how it works.
	7) Rewrite it in C (carefully)
	8) Convince your management that they don't really need this.

If you plan to keep the script, or re-write it, try and get a copy of
the paper.  If you can't find it, send me mail.
   Vicki Brown   A/UX Development Group         Apple Computer, Inc.
   Internet: vlb@apple.com                      MS 58A, 10440 Bubb Rd.
   UUCP: {sun,amdahl,decwrl}!apple!vlb          Cupertino, CA  95014, USA
	Ooit 'n Normaal Mens Ontmoet?  En..., Beviel 't?
	(Did you ever meet a normal person?  Did you enjoy it?)

------- End of Forwarded Message

jfh@rpp386.cactus.org (John F Haugh II) (03/14/91)

In article <1921@bacchus.esa.oz.au> craig@bacchus.esa.oz.au (Craig Macbride) writes:
>In <565@rufus.UUCP> drake@drake.almaden.ibm.com writes:
>>o  Access Control Lists (ACLs) on individual files.
>>o  Getting the passwords where they can't be publically read
>
>These are both designed to be non-standard and break other people's software.
>I'd call them good if they didn't do that.

There is NO standard for ACLs - POSIX 1003.6 is still not soup
yet, and when I argued to pick Draft 9 and stick with that until
POSIX Dot6 =was= soup, someone pointed out that there was soon
going to be YetAnotherDot6Draft.

As for shadowed passwords, it is worth pointing out that there
is NO standard for that yet either.  AT&T changed the format
of the shadow data from SVR3.2 to SVR4.  BSD is just catching
on to the idea, etc.  I have argued with the current security
department guys to have SVR4-compatible library routines for
getting the shadowed data, but I don't know what they are doing
with that suggestion.  Coding up a set of getspent(3) routines
wouldn't take much effort.  I'd do it if I had a S/6000 I could
access from home (hint, hint).

>>o  Eliminating setuid shell scripts
>
>A good idea in theory, but the security of the system is still largely a
>matter of how it's administered.

They should be removed, but only because they are a giant
security hole.  IBM has not, despite Drake's claim, removed
setuid shell scripts from the system.  For that matter, most
of the other vendors haven't either ...
-- 
John F. Haugh II        | Distribution to  | UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832 | GEnie PROHIBITED :-) |  Domain: jfh@rpp386.cactus.org
"I've never written a device driver, but I have written a device driver manual"
                -- Robert Hartman, IDE Corp.

woods@eci386.uucp (Greg A. Woods) (03/15/91)

In article <39950@cup.portal.com> PLS@cup.portal.com (Paul L Schauble) writes:
> When unix was first developed, the system gave only minimal attention to
> security issues. Lately, this has been a hot topic and a lot of work has
> been done to improve Unix security.

Excuse me, but IMHO, when UNIX was first developed, *more* attention
was put into careful consideration of security issues than with almost
any other system of its time (except maybe for MULTICS).  A
significant patent was even granted to one of the inventors for a
very innovative systems security technique.

> I'm curious: What do you think are the five most significant changes or 
> additions that have been made to Unix to improve its security?

The most significant "thing" that has affected UNIX security in the
past few years is the perpetuation of myths about how insecure some
people perceive UNIX to be.  In addition, partially because of a large
amount of ignorance, UNIX security has been mangled by well-meaning
vendors who were pushed by clients who believed the myths.

The only other significant thing I can think of is "the network".
Many network tools have introduced significant security problems to
UNIX where none existed in isolated systems, e.g. sendmail, finger, and NFS.

Of course the things most people might have been thinking of are the
various implementations of "Orange Book" security features for UNIX.
-- 
							Greg A. Woods
woods@{eci386,gate,robohack,ontmoh,tmsoft}.UUCP		ECI and UniForum Canada
+1-416-443-1734 [h]  +1-416-595-5425 [w]  VE3TCP	Toronto, Ontario CANADA
Political speech and writing are largely the defense of the indefensible-ORWELL

pcg@test.aber.ac.uk (Piercarlo Antonio Grandi) (03/18/91)

On 14 Mar 91 23:09:44 GMT, woods@eci386.uucp (Greg A. Woods) said:

woods> In article <39950@cup.portal.com> PLS@cup.portal.com (Paul L
woods> Schauble) writes:

PLS> When unix was first developed, the system gave only minimal attention to
PLS> security issues. Lately, this has been a hot topic and a lot of work has
PLS> been done to improve Unix security.

I would disagree with both statements; Unix was not designed for a
secure environment or for security, but some security mechanisms were
built in anyhow, probably as a result of the author's exposure to
Multics.

woods> Excuse me, but IMHO, when UNIX was first developed, *more*
woods> attention was put into careful consideration of security issues
woods> than with almost any other system of its time (except maybe for
woods> MULTICS).

This is a fairly counterfactual statement. There were systems
(capability based systems for example) designed for much greater
security at the time than Unix could possibly have, and Multics and
these other systems are simply in entirely another league from Unix.

woods> A significant patent was even granted to one of the inventors for
woods> a very innovative systems security technique.

If you really believe what you have written (significant, very
innovative, systems security), I have this nice patent on moving cursors
on a screen using XOR that I can let you have for a song :-( :-( :-(.

PLS> I'm curious: What do you think are the five most significant changes or 
PLS> additions that have been made to Unix to improve its security?

The most significant thing would be a completely different filesystem,
and then a drastic simplification of the programmer's interface, and
then removal of uid 0 privileges, and then system object labeling and a
security manager to enforce security policies on labeled objects. Only
the latter two have been done, and they are the least important...

The filesystem semantics and the system call semantics have become more
complex and hazardous with time, resulting in kernel bloat and other
great opportunities for insecurity.

woods> The most significant "thing" that has affected UNIX security in
woods> the past few years is the perpetuation of myths about how
woods> insecure some people perceive UNIX to be.

Unix is a terribly insecure system, if by security we mean something
substantial, like the military thinks about it. If we mean security as in
not letting hackers have free rein in an office environment, then with
effort and care one *can* achieve some effective, very basic security,
thanks to the thoughtful provision of minimal security primitives.

Just to give examples of the very low level of *real* security problems
of Unix: containment/write-down is not addressed, trapdoor problems are
not addressed, file protection granularity is too coarse, etc. It is
possible to get around all these problems, with great effort, by
implementing mechanisms and policies from scratch. Database vendors have
more "security" because they have done precisely that.

woods> In addition, partially because of a large amount of ignorance,
woods> UNIX security has been mangled by well meaning vendors who were
woods> pushed by clients who believed the myths.

No, I would not categorize vendors as "well meaning", because _some_ of
them have real security experts on their staff, and they know perfectly
well that what is being provided is the *illusion* of security, not
security.

See the infamous trapdoor left in most System V/386 products, even those
with purported C2 level security. Also see the ridiculous C2 security
provisions of SCO Unix, which are so cumbersome as to be a security hazard
in themselves, as they get in the way of ordinary system administration.
If security mechanisms cost a lot of effort and administrative
attention, they will be circumvented.

woods> The only other significant thing I can think of is "the network".
woods> Many network tools have introduced significant security problems
woods> to UNIX where none existed in isolated systems.  Eg. sendmail,
woods> finger, & nfs.

How true. All these networking thingies have been designed to "work",
without any attention paid to security, recoverability, performance,
portability or other such silly ideas. Slowly a process of fixup engineering
is being applied to each of them, in a painful and ultimately
unproductive effort.

woods> Of course the things most people might have been thinking of are
woods> the various implementations of "Orange Book" security features
woods> for UNIX.

The AT&T one using labels and the like is not that bad, even if it has
some defects. SCOMP from Honeywell was/is great. But, as SCOMP somehow
shows, the best approach is to build a secure system from scratch and
emulate Unix on top of it...
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@aber.ac.uk

jat@xavax.com (John Tamplin) (03/18/91)

In article <19099@rpp386.cactus.org> jfh@rpp386.cactus.org (John F Haugh II) writes:
>>o  Getting the passwords where they can't be publically read
>
>This was done for AIX v2, but has also been done with SVR3.2 and
>BSD.  No one has solved certain problems with transparency - that
>is, making shadowed passwords look and feel like old-style
>publically readable passwords.  This means all the programs that
>used to think pw_passwd was valid are wrong ;-(.  Making matters
>worse, AT&T, BSD, and IBM all fail to converge on a single
>mechanism (and AT&T fails to agree on a single file format for
>there various releases).  So you have a non-standard,
>non-transparent feature ...

I am using a SVR3.2.2 system with shadowed passwords, and the interface
provided is getspent() etc.  After hacking one too many programs to use the
new library calls to get the password, I decided the best way to solve the
problem was to have getpwent() look up pw_passwd in the shadow file iff
euid=root.  This way, programs that are supposed to have access have it in
the same old fashion, and programs that don't get some nonsense password
(either ! or x in the implementations I have seen).

Maybe one of these days I will get around to actually writing this.

-- 
John Tamplin						Xavax
jat@xavax.COM						2104 West Ferry Way
...!uunet!xavax!jat					Huntsville, AL 35801

jfh@rpp386.cactus.org (John F Haugh II) (03/18/91)

In article <1991Mar18.030955.13123@xavax.com> jat@xavax.com (John Tamplin) writes:
>I am using a SVR3.2.2 system with shadowed passwords, and the interface
>provided is getspent() etc.  After hacking one too many programs to use the
>new library calls to get the password, I decided the best way to solve the
>problem was to have getpwent() look up pw_passwd in the shadow file iff
>euid=root.  This way, programs that are supposed to have access have it in
>the same old fashion, and programs that don't get some nonsense password
>(either ! or x in the implementations I have seen).

This was tried by myself and others.  There are problems (surprised?)
because you have two locations for the information, and clouding the
issue of "where did that password come from?" causes you to not know
where to put it back to.  It also eats your performance for lunch if
you do more than one or two getpwent() calls.

In the first case, you make a call to getpwnam(), say, to get the
password so your nifty system administration utility can modify it
and put it back - let's call that utility "chfn", which runs setuid
root so it can write the password file.  It does the getpwnam(),
gets the password out of the shadow file (by virtue of running
euid=0) - then what?  Write it back, but where?  If you do the putpwent()
you will have exported the encrypted password from the shadow file
to the password file.

In the second case, imagine your nifty system utility, call it
finger, reads the password file for a bunch of users.  Each call to
getpwent() implies a call to getspent() (because you invoked finger
as root), and each call to getpwnam() calls getpwent() repeatedly
because that's how it reads the password file.  Remembering that
getspent() is a linear search over a text file, you suddenly realize
that finger now runs in O(N**2) time or so.  DBM-ified password
files would be real handy right about now, but did AT&T give them
to you?

The best solution seems to be to update the code to use the
correct interface.

>Maybe one of these days I will get around to actually writing this.

Be careful out there ...
-- 
John F. Haugh II        | Distribution to  | UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832 | GEnie PROHIBITED :-) |  Domain: jfh@rpp386.cactus.org
"I've never written a device driver, but I have written a device driver manual"
                -- Robert Hartman, IDE Corp.

jfh@rpp386.cactus.org (John F Haugh II) (03/18/91)

In article <PCG.91Mar17174428@aberdb.test.aber.ac.uk> pcg@test.aber.ac.uk (Piercarlo Antonio Grandi) writes:
>Unix is a terribly insecure system, if by security we mean something
>substantial, like the military thinks about it. If we mean security as in
>not letting hackers have free rein in an office environment, then with
>effort and care one *can* achieve some effective, very basic security,
>thanks to the thoughtful provision of minimal security primitives.
>
>Just to give examples of the very low level of *real* security problems
>of Unix: containment/write-down is not addressed, trapdoor problems are
>not addressed, file protection granularity is too coarse, etc. It is
>possible to get around all these problems, with great effort, by
>implementing mechanisms and policies from scratch. Database vendors have
>more "security" because they have done precisely that.

Security is what you define it to be for the system you are defining
it for.  UNIX was designed to be a nice little operating system for
people to get work done in.  It wasn't designed for spook work.  As
such, the lack of MAC is a feature, not a bug.  That means that it is
just fine for a cooperative little office environment.

MAC and trusted path and so on are real nice, if you need them.  If you
don't, you wind up with the problem you mentioned concerning SCO UNIX -
you have all these features you can't figure out what to do with.  I
can't imagine a system administrator that can't figure out how to
remove a file named "-r" from his home directory [ the answer is "you
have to run mkfs to remove the file" ] being able to set up object
auditing for a random assortment of files.
-- 
John F. Haugh II        | Distribution to  | UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832 | GEnie PROHIBITED :-) |  Domain: jfh@rpp386.cactus.org
"I've never written a device driver, but I have written a device driver manual"
                -- Robert Hartman, IDE Corp.

woods@eci386.uucp (Greg A. Woods) (03/22/91)

In article <PCG.91Mar17174428@aberdb.test.aber.ac.uk> pcg@test.aber.ac.uk (Piercarlo Antonio Grandi) writes:
pcg> On 14 Mar 91 23:09:44 GMT, woods@eci386.uucp (Greg A. Woods) said:
pcg> woods> In article <39950@cup.portal.com> PLS@cup.portal.com (Paul L
pcg> woods> Schauble) writes:
[....]
pcg> I would disagree with both statements; Unix was not designed for a
pcg> secure environment or for security, but some security mechanisms were
pcg> built in anyhow, probably as a result of the author's exposure to
pcg> Multics.
[....]
pcg> woods> Excuse me, but IMHO, when UNIX was first developed, *more*
pcg> woods> attention was put into careful consideration of security issues
pcg> woods> than with almost any other system of its time (except maybe for
pcg> woods> MULTICS).
pcg> 
pcg> This is a fairly counterfactual statement. There were systems
pcg> (capability based systems for example) designed for much greater
pcg> security at the time than Unix could possibly have, and Multics and
pcg> these other systems are simply in entirely another league from Unix.

Perhaps you haven't read Ritchie's paper about UNIX Security recently?
[ Neither have I actually :-) ]

Just because the first tape out of the Labs didn't implement a great
deal of security doesn't mean that careful forethought didn't go into
designing the security mechanisms of UNIX.

pcg> woods> A significant patent was even granted to one of the inventors for
pcg> woods> a very innovative systems security technique.
pcg> 
pcg> If you really believe what you have written (significant, very
pcg> innovative, systems security), I have this nice patent on moving cursors
pcg> on a screen using XOR that I can let you have for a song :-( :-( :-(.

I'm not advocating patents BTW. In fact, I think this particular
patent (the setuid patent) has been placed into the public domain by
AT&T, which IMHO was a very good gesture, though their recent behavior
w.r.t. X-11 leaves me with many reservations about their good intentions.

pcg> Unix is a terribly insecure system, if by security we mean something
pcg> substantial, like the military thinks about it. If we mean security as in
pcg> not letting hackers have free rein in an office environment, then with
pcg> effort and care one *can* achieve some effective, very basic security,
pcg> thanks to the thoughtful provision of minimal security primitives.

Yes, I mean security in terms of how it might be effectively applied
for a system in a business environment.  UNIX provides for this much
security *easily*, though not often "out-of-the-box".

Although the "military" definition of security has its merits, it is
not entirely relevant to the average MIS department.  In fact, I would
argue that very few MIS departments have anywhere near enough
discipline to implement anything like what the "Orange book" defines
for the higher levels of security.

"Orange book" security (of any significance) *requires* far more than
just software.  Strict implementation of policy, both inside the TCB
and outside (i.e. by the personnel) is necessary to have a secure
*system*.  Some of the highest levels even imply you require armed
guards on the machine room!

As you said, much of the more extensive security that MIS types might
need can be implemented at the applications level (eg. database
security by field/record).  If done intelligently, this can even be
integrated into standard UNIX security, such that a true TCB exists.
IMHO, this is where object-level security belongs in the first place!

I have in the past argued that UNIX can be made C2 secure *without*
kernel changes, i.e *easily*.  Of course that argument hinges on one's
interpretation of the "Orange book".  I admit that since I do not have
a background emphasising military security, my interpretation is
probably quite "loose".  In addition though, I'll even go so far as to
say the "Orange book" is out of date.

Yes, higher levels of security do require some of the features you
mentioned (such as removing the concept of a "superuser").  However, I
have a hard time believing such systems are still UNIX.  I believe
POSIX 1003.1 still has a dependence upon uid-0, though the POSIX
1003.2 draft has carefully avoided such dependence.

I stand by my original statement that there has been more obscurity
and myth about security thrown at UNIX than there have been
significant enhancements (such as SecureWare's C2-targeted stuff that
SCO is pushing, or AT&T's SysV/MLS, or Gould's port); and that
eliminating this layer of myth and using the existing features in UNIX
in an organised way will be the most significant thing "we" can do for
UNIX security, even when networks are involved.

Remember, the level of a TCB [Trusted Computing Base] (as defined by
the "Orange book") can be measured by evaluating the following
criteria:  Availability, Confidentiality, Accountability, Integrity,
and Trustworthiness.  What many people think of when they are talking
about "security", and what the "Orange book" spends the most amount of
time on, are confidentiality and accountability.  The other criteria
are often ignored.  Traditional UNIX provides a reasonable level in
all of these criteria, when managed carefully.  Enhancing only the two
criteria I previously mentioned does not, in my books, result in a
higher-level TCB.
-- 
							Greg A. Woods
woods@{eci386,gate,robohack,ontmoh,tmsoft}.UUCP		ECI and UniForum Canada
+1-416-443-1734 [h]  +1-416-595-5425 [w]  VE3TCP	Toronto, Ontario CANADA
Political speech and writing are largely the defense of the indefensible-ORWELL

martin@mwtech.UUCP (Martin Weitzel) (04/08/91)

In article <1991Mar22.024124.3238@eci386.uucp> woods@eci386.UUCP (Greg A. Woods) writes:
[In answer to article <PCG.91Mar17174428@aberdb.test.aber.ac.uk> pcg@test.aber.ac.uk (Piercarlo Antonio Grandi)]
...
>Yes, higher levels of security do require some of the features you
>mentioned (such as removing the concept of a "superuser").
...
Well, I know this complaint that UNIX isn't secure because there is
one person who can read the files of all others ... but what if there
were no such privilege?

	- how should checks of filesystem integrity, backups and
	  restores be done if not even a few programs could access the
	  raw information on the disk?
	- how should new system software be installed?

If there exists a privileged account for the above-mentioned activities
(and name the OS on which there is no such account), then the door is
open for installing any program you wish, which does anything you wish
with the data on the disk! Furthermore: if there is a person who can do
backups on physically removable media, even if this person does not have
the privilege to read all the users' data, how do you control what he or
she does with the backups *after* removing the media?

I especially *like* the design of UNIX for making it so clear to everyone
that the things left on the computer's disk are by no means more secure
than the things you leave in your office (to which your boss has a key -
at least for a case of emergency).

Again, name the OS on which the things I described here are not possible.
I'm not interested in hearing that they are merely more difficult, e.g.
because there is no "superuser account" and special rights like accessing
the raw disk are only granted to some few programs. You can have this on
UNIX too by simply creating some few new logins with UID 0 but with the
mentioned special programs (backup/restore, filesystem check, etc.) as
their "login shell". The "real" superuser account need only be known for
extremely few activities, like installing new software and configuring
the kernel.
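A minimal sketch of such a setup (the account names, paths and device
names here are my own invention, not from the article; dump's arguments
vary by system):

```shell
# Hypothetical /etc/passwd entries: each extra UID-0 login gets exactly
# one privileged program as its login shell, never a general shell:
#
#   backup:x:0:0:backup operator:/:/usr/local/adm/do-backup
#   fscheck:x:0:0:filesystem check:/:/usr/local/adm/do-fsck
#
# The wrapper pins its environment and exec's the one tool, so the
# operator can do nothing else from this login:
cat > do-backup <<'EOF'
#!/bin/sh
PATH=/bin:/etc:/usr/bin     # fixed PATH, no operator influence
umask 077
exec /etc/dump 0uf /dev/rmt0 /dev/rdsk/0s0
EOF
chmod 755 do-backup
```

The weak point, as the follow-ups below note, is that every such
wrapper must itself be written very carefully.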
-- 
Martin Weitzel, email: martin@mwtech.UUCP, voice: 49-(0)6151-6 56 83

jfh@rpp386.cactus.org (John F Haugh II) (04/10/91)

In article <1090@mwtech.UUCP> martin@mwtech.UUCP (Martin Weitzel) writes:
>Well, I know this complaint that UNIX isn't secure because there is
>one person who can read the files of all others ... but what if there
>were no such privilege?
>
>	- how should checks of filesystem integrity, backups and
>	  restores be done if not even a few programs could access the
>	  raw information on the disk?

There are separate privileges for such things as determining file system
integrity, making backups, restoring files, etc.  For example, someone
in the "system administrator" role would be able to take backups and
perform restores.  There would be a separate mechanism for ensuring
system integrity, since a system which is in an unknown state shouldn't
be used anyhow and there is a difference between "repair" and
"maintenance" activities.

>If there exists a privileged account for the above-mentioned activities
>(and name the OS on which there is no such account), then the door is
>open for installing any program you wish, which does anything you wish
>with the data on the disk! Furthermore: if there is a person who can do
>backups on physically removable media, even if this person does not have
>the privilege to read all the users' data, how do you control what he or
>she does with the backups *after* removing the media?

At some point in time you have to trust the people you've hired to do
their jobs.  The point of slicing root privileges up into little pieces
is to make it so you can control what "their job" is.  For example, if
the "administrator" can create any unprivileged account, but only the
"security administrator" can create privileged ones, you can't go from
"administrator" to "privileged user".  Likewise, if you can only restore
files that were backed up using the special utilities, you can't just
put any program you want on the system.  It would have to have been
backed up with whatever enhanced privilege you are trying to restore it
with.  So you can't go from "random tape" to "privileged application"
either.

>Again, name the OS on which the things I described here are not possible.
>I'm not interested in hearing that they are merely more difficult, e.g.
>because there is no "superuser account" and special rights like accessing
>the raw disk are only granted to some few programs.

There are quite a few.  I suggest you read the "Evaluated Products List"
from the NCSC for a sampling of them.

>                                                    You can have this on
>UNIX too by simply creating some few new logins with UID 0 but with the
>mentioned special programs (backup/restore, filesystem check, etc.) as
>their "login shell". The "real" superuser account need only be known for
>extremely few activities, like installing new software and configuring
>the kernel.

Sure, this is one approach.  Now ensure that I can't take my "change any
user account" authority and change my login shell to be /bin/sh, or the
"execute any command" login shell.  You have to ensure that no collection
of privileges that a user might have can be combined in some fashion to
grant some other privilege they did not already possess.  By giving them
"all" privilege and sticking them in some particular program, you have to
write that program very carefully.  You must then do the same for every
other program they might execute.  It is far easier to divide the
privileges in one place, and let the kernel manage it, than to try to
get it right in every single program the administrators might execute.
-- 
John F. Haugh II        | Distribution to  | UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832 | GEnie PROHIBITED :-) |  Domain: jfh@rpp386.cactus.org
"If liberals interpreted the 2nd Amendment the same way they interpret the
 rest of the Constitution, gun ownership would be mandatory."

martin@mwtech.UUCP (Martin Weitzel) (04/11/91)

In article <19183@rpp386.cactus.org> jfh@rpp386.cactus.org (John F Haugh II) writes:
jfh> In article <1090@mwtech.UUCP> martin@mwtech.UUCP (I) wrote:

mw>> Well, I know this complaint that UNIX isn't secure because there is
mw>> one person who can read the files of all others ... but what if there
mw>> were no such privilege?
mw>> 
mw>> 	- how should checks of filesystem integrity, backups and
mw>> 	  restores be done if not even a few programs could access the
mw>> 	  raw information on the disk?

jfh> There are separate privileges for such things as determining file system
jfh> integrity, making backups, restoring files, etc.  For example, someone
jfh> in the "system administrator" role would be able to take backups and
jfh> perform restores.  There would be a separate mechanism for ensuring
jfh> system integrity, since a system which is in an unknown state shouldn't
jfh> be used anyhow and there is a difference between "repair" and
jfh> "maintenance" activities.

That was not quite my point (maybe it was badly stated). My first remark
should alert the reader that every computer system will contain at least
*some* programs that are able to read every user's data. (The difference
under UNIX is that *any* program can be used for this if it is run from
a privileged account.)

mw>> If there exists a privileged account for the above-mentioned activities
mw>> (and name the OS on which there is no such account), then the door is
mw>> open for installing any program you wish, which does anything you wish
mw>> with the data on the disk! Furthermore: if there is a person who can do
mw>> backups on physically removable media, even if this person does not have
mw>> the privilege to read all the users' data, how do you control what he or
mw>> she does with the backups *after* removing the media?

jfh> At some point in time you have to trust the people you've hired to do
jfh> their jobs.

Wait a minute: take the scenario that in a (badly configured) UNIX
system I have to give a privileged account to those people who have to
take care of backups. Now I complain: this is really bad - I don't trust
these people and fear they will use their privileged account to sneak
into other users' files. Under these circumstances, would it be wise to
trust the same people not to take the backup tapes and read them
anywhere else? The counter argument may be that it is much easier to try
a "cat someone-elses-file" than to carry a tape to another system. But
then the solution with some extra accounts that can only be used for
certain privileged activities (as described later) will also suffice,
even if there are some loopholes, as long as a high enough barrier is
placed so that no one can simply "cat" someone else's files from the
backup account.

jfh> The point of slicing root privileges up into little pieces
jfh> is to make it so you can control what "their job" is.  For example, if
jfh> the "administrator" can create any unprivileged account, but only the
jfh> "security administrator" can create privileged ones, you can't go from
jfh> "administrator" to "privileged user".  Likewise, if you can only restore
jfh> files that were backed up using the special utilities, you can't just
jfh> put any program you want on the system.  It would have to have been
jfh> backed up with whatever enhanced privilege you are trying to restore it
jfh> with.  So you can't go from "random tape" to "privileged application"
jfh> either.

In general - as you may have noted - I sympathize with the idea of
splitting the rights to do things (change user privileges, do backups,
etc.) across several accounts.

My claim still is that this can be done without changing the kernel, and
that the added security you win *if* you make enhancements to the kernel
is far less than the chance that some people you hired to do their jobs
CAN'T be trusted.

mw>> You can have this [split privileges into small slices] on
mw>> UNIX too by simply creating some few new logins with UID 0 but with
mw>> the mentioned special programs (backup/restore, filesystem check,
mw>> etc.) as their "login shell". The "real" superuser account need only
mw>> be known for extremely few activities, like installing new software
mw>> and configuring the kernel.

jfh> Sure, this is one approach.  Now insure that I can't take my "change any
jfh> user account" authority and change my login shell to be /bin/sh, or the
jfh> "execute any command" login shell.  You have to insure that no collection
jfh> of privileges that a user might have can be combined in some fashion to
jfh> grant some other privilege they did not already possess.  By giving them
jfh> "all" privilege and sticking them in some particular program, you have to
jfh> write that program very carefully.  You must then do the same for every
jfh> other program they might execute. It is far easier to divide the
jfh> privileges in one place, and let the kernel manage it, rather than
jfh> trying to get it right in every single program the administrators might
jfh> execute.

This is a well-taken argument. In general I support the idea of
centralizing important things in a few places instead of spreading them
throughout the system. If I bring up a counter argument now, I don't do
so to "win the battle" in this discussion; it's only one point I want to
remind the readers of this thread of:

If a really serious security bug should become known (and what product
never had any security bugs?) I much prefer to be able to correct it by
changing some access rights, rewriting some shell procedures, or the
like. As a last resort, I could even rewrite some program (for backups,
filesystem checks or whatever) if I don't have the source code and it
exposes some serious security flaw. (Most preferably I would replace
such a program with a trusted PD version which comes with source.) But
if there is a bug in the kernel's security mechanisms, I'm rather
helpless and will have to wait for a fix from whoever has the kernel
source for my system. (And: recall how long it took until ISC recently
could be made to correct the "writable u-area bug" in their product!)

Summary: I think I could be quite happy with the security mechanisms of
the V7 kernel, combined with a new login which stores passwords in a
file with no read access for the public, a secure "mkdir" implemented as
a system call, and preferably the source (or PD replacements) for all
the programs that run with EUID == 0, so that I can cure all
deficiencies as soon as they become known. Add to this a setup as
described some paragraphs above, with different accounts for certain
privileged activities such as backups (if required by the operating
environment), and I think I would have a rather secure system and would
surely NOT demand changes in the kernel.
-- 
Martin Weitzel, email: martin@mwtech.UUCP, voice: 49-(0)6151-6 56 83

peter@ficc.ferranti.com (Peter da Silva) (04/12/91)

In article <19183@rpp386.cactus.org> jfh@rpp386.cactus.org (John F Haugh II) writes:
> At some point in time you have to trust the people you've hired to do
> their jobs.  The point of slicing root privileges up into little pieces
> is to make it so you can control what "their job" is.

These two sentences are contradictory: if you can trust them, you don't
need to slice up privileges. If you need to slice up privileges, it's
because you can't trust them.

> Likewise, if you can only restore
> files that were backed up using the special utilities, you can't just
> put any program you want on the system.

Sure: back up to tape, read tape on a non-secure system, edit it, write
it out again, and restore.
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

thomson@hub.toronto.edu (Brian Thomson) (04/12/91)

In article <PONA272@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <19183@rpp386.cactus.org> jfh@rpp386.cactus.org (John F Haugh II) writes:
>> Likewise, if you can only restore
>> files that were backed up using the special utilities, you can't just
>> put any program you want on the system.
>
>Sure: back up to tape, read tape on a non-secure system, edit it, write
>it out again, and restore.

You don't get a secure installation by buying a secure machine and
putting it in a location where a user can tamper with its backup tapes.
Of course secure systems require physical safeguards!
-- 
		    Brian Thomson,	    CSRI Univ. of Toronto
		    utcsri!uthub!thomson, thomson@hub.toronto.edu

jfh@rpp386.cactus.org (John F Haugh II) (04/15/91)

In article <1092@mwtech.UUCP> martin@mwtech.UUCP (Martin Weitzel) writes:
>jfh> At some point in time you have to trust the people you've hired to do
>jfh> their jobs.
>
>Wait a minute: take the scenario that in a (badly configured) UNIX system
>I have to give a privileged account to those people who have to take care
>of backups. Now I complain: this is really bad - I don't trust these people
>and fear they will use their privileged account to sneak into other users' files.

THEN DON'T DO IT.  It makes absolutely no sense whatsoever to put
passwords on the user accounts and then give superuser authority to
someone that you know is going to break into the other users'
accounts.  If you give the authority to modify any user account to
someone you can't trust not to abuse the authority, you have the
same situation.  And so on for every privileged role.

>Under these circumstances, would it be wise to trust the same people
>not to take the backup tapes and read them anywhere else?

If you don't have physical security (i.e., they can take the tapes
anywhere they want) and you can't trust your personnel, I'd suggest
you turn off the computer system and just give up.

Basically your complaint is that you must give privileges to people
that you can't trust not to abuse them, and that you can't control
the data once they've taken it.  Sounds like you've got a rather serious
problem on your hands.  Good luck.

>My claim still is that this can be done without changing the kernel, and
>that the added security you win *if* you make enhancements to the kernel
>is far less than the chance that some people you hired to do their jobs
>CAN'T be trusted.

These are not the same problems.  They aren't even related to each
other.  Particularly since the former is meant to prevent things
that the latter can't address, such as people you didn't hire accessing
your system.  The only completely secure computer is sitting in a room,
with no outside connections, powered off, and encased in concrete.  If
you insist on hiring people you think are going to violate the system's
security, there is no point in keeping out the rest of the world.  You've
already given the keys to the bad guys.
-- 
John F. Haugh II        | Distribution to  | UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832 | GEnie PROHIBITED :-) |  Domain: jfh@rpp386.cactus.org
"If liberals interpreted the 2nd Amendment the same way they interpret the
 rest of the Constitution, gun ownership would be mandatory."

edotto@ux1.cso.uiuc.edu (Ed Otto) (04/15/91)

jfh@rpp386.cactus.org (John F Haugh II) writes:

>>Under these circumstances, would it be wise to trust the same people
>>not to take the backup tapes and read them anywhere else?

>If you don't have physical security (i.e., they can take the tapes
>anywhere they want) and you can't trust your personnel, I'd suggest
>you turn off the computer system and just give up.

Nice thought...in my case it's a combination lock on the door to the machine
room that, two hours after it was installed, 46 people had the combination
to...

>Basically your complaint is that you must give privileges to people
>that you can't trust not to abuse them, and that you can't control
>the data once they've taken it.  Sounds like you've got a rather serious
>problem on your hands.  Good luck.

Ya - from me, too.  I simply said "I'll do all of the work."

>These are not the same problems.  They aren't even related to each
>other.  Particularly since the former is meant to prevent things
>that the latter can't address, such as people you didn't hire accessing
>your system.  The only completely secure computer is sitting in a room,
>with no outside connections, powered off, and encased in concrete.  If
>you insist on hiring people you think are going to violate the system's
>security, there is no point in keeping out the rest of the world.  You've
>already given the keys to the bad guys.

Yup...once the nasties are out and about your workplace, you've lost the whole
war...I mean, anyone with su access can run the 'adduser' script...and once
THAT happens, well, kiss it goodbye.


*******************************************************************************
*                             *  Netmail addresses:                           *
*  Edward C. Otto III         *    edotto@uipsuxb.ps.uiuc.edu                 *
*  University of Illinois     *    edotto@uiucux1.cso.uiuc.edu                *
*  Printing Services Office   *    UIPSA::OTTO (Decnet node 46.99)            *
*  54A E. Gregory Dr.         *    otto@uipsa.dnet.nasa.gov                   *
*  Champaign, IL  61820       *  Office phone: 217/333-9422                   *
*                             *                                               *
*******************************************************************************

	"As knowledge is to ignorance, so is light unto the darkness."

		       ---     GO 'PODS!     ---

jpe@egr.duke.edu (John P. Eisenmenger) (04/16/91)

From article <1991Apr15.163013.20421@ux1.cso.uiuc.edu>, by edotto@ux1.cso.uiuc.edu (Ed Otto):
> jfh@rpp386.cactus.org (John F Haugh II) writes:

>>>Under these circumstances, would it be wise to trust the same people
>>>not to take the backup tapes and read them anywhere else?

>>If you don't have physical security (i.e., they can take the tapes
>>anywhere they want) and you can't trust your personnel, I'd suggest
>>you turn off the computer system and just give up.

Hmm.  This may seem like a silly idea, but could you set things up so
that a user doesn't need root privileges to perform dumps, and so that
the dumps are encrypted such that only you can decrypt and read the
data?  That way: 1) they don't need to know the root password; and 2)
they can't take a dump tape and read it on another machine...  This
would at least allow you to offload dumps to someone else.
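Getting root out of the dump step is classically done by making the raw
device readable by an "operator" group (my assumption of the mechanism;
the group and device names are invented).  An ordinary file stands in
for the device to show the permission pattern:

```shell
# On a real system one would do (as root):
#   chgrp operator /dev/rdsk/0s0
#   chmod 640      /dev/rdsk/0s0
# so dump run by an "operator" group member needs no root at all.
# Demonstration with an ordinary file standing in for the raw device:
touch disk.img
chmod 640 disk.img
ls -l disk.img | cut -c1-10     # owner rw, group r, others nothing
```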

> Nice thought...in my case it's a combination lock on the door to the machine
> room that, two hours after it was installed, 46 people had the combination
> to...

Yes, combination locks are a total waste of money.  I'm amazed at how
many people can't remember a 5-digit combination.  Unfortunately there
isn't a way to keep the combination from spreading like wildfire.

We use combination locks on our workstation rooms and were having the
above problem, so I looked into getting a magnetic card reader for the
door.  These have been installed throughout campus now (even on Coke
machines), so I thought it'd be a reasonable thing to have.  The
University wanted about $3000 for the installation, plus $100/mo for
support.  All in all I wish we had keyed locks and charged a deposit
for the keys -- it'd be better all the way around.

>>Basically your complaint is that you must give privileges to people
>>that you can't trust not to abuse them, and that you can't control
>>the data once they've taken it.  Sounds like you've got a rather serious
>>problem on your hands.  Good luck.

> Yup...once the nasties are out and about your workplace, you've lost the whole
> war...I mean, anyone with su access can run the 'adduser' script...and once
> THAT happens, well, kiss it goodbye.

Not necessarily.  For example, I use a piece of software that grabs the
data from protected files that I can keep offline and mount only when
needed.  One run of this software will freeze all unwarranted accounts,
thus getting rid of the meanies...  It also makes tracking classes,
graduating students, etc. quite a bit easier.

-John

sanders@cactus.org (Tony Sanders) (04/18/91)

>In article <1092@mwtech.UUCP> martin@mwtech.UUCP (Martin Weitzel) writes:
>>backups. Now I complain: This is really bad - I don't trust these people and
>>fear they will use their privilegded account to sneak into other user's files.
What if the backup/restore utilities on the "secure" system used an
encryption scheme before writing to tape (like dump|crypt|dd of=/dev/mt,
assuming each dump will fit on a single tape)?  Then tapes written
on the "secure" system could only be read back by the corresponding
restore utility on that system.  You must of course secure the
new backup/restore utilities from them, but that's just SOP.

Restoring the information on an insecure system would be useless;
you have to have the password to use it.
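A self-contained sketch of that pipeline.  Since crypt(1) is weak and
not universally available, openssl stands in for it here (my
substitution, not the poster's), and an ordinary file stands in for
/dev/mt:

```shell
# dump | crypt | dd, approximated: the key stays on the "secure"
# system, so a tape carried to another machine is unreadable there.
echo "confidential payroll data" > file.txt
tar cf - file.txt |
    openssl enc -aes-256-cbc -pbkdf2 -pass pass:tape-key > backup.tape

# The ciphertext reveals nothing; restoring requires the same key:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:tape-key < backup.tape |
    tar xOf -
```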

I'm a little behind in this group, so pardon me if this has already
been mentioned.

-- sanders@cactus.org
I am not an IBM representative, I speak only for myself.
I have a wonderful proof that emacs is better than vi,
   unfortunately this .sig is too small to contain it.

cks@hawkwind.utcs.toronto.edu (Chris Siebenmann) (04/20/91)

sgf@cfm.brown.edu (Sam Fulcomer) writes:
[In a discussion of secure backups if you don't necessarily trust your
 operators:]
| Why bother having the operator log in? Have the machines reboot at
| backup time, but with the backup program switched on in the rc (or
| inittab, or whatever...).  After backups are done the machine can come
| up normally. Fine if you want to encrypt the dump, too.

 We do something very similar to this, although for different reasons
(and without the encryption) on a set of student systems. We have
Exabytes for backups, and I wanted to do the backups in single-user
mode. However, the student systems don't have operators around them 24
hours a day; the site person works 9 to 5. The solution was to write a
script that backed up everything (with error checking and logs) to
tape, and another script that did some setup, touched a file off in a
mounted filesystem, and started up a shutdown to single-user mode at
some future time. When the system goes single-user, it runs /.profile,
which checks to see if the file exists; if so, it runs the backup
script and then reboots multiuser.

 So the site person pops the right tape into the drive and queues up the
shutdown-backup before he goes home. Sometime later (typically midnight
these days) the system goes down to single-user mode, backs stuff up,
ejects the tape, and goes back to multi-user mode, all without anyone
around. It's quite nice and very convenient.

 But, you ask, what happens if the system crashes and comes up single-
user in the meantime -- won't it start running the backups? That's
why the trigger file is off in a mounted filesystem, instead of on
the root partition; if the system crashes and reboots single-user,
that partition won't be mounted when /.profile is run, so nothing bad
happens.
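The trigger-file logic can be sketched as follows (paths are invented;
on the real systems the flag lives in a filesystem that is mounted only
when the shutdown was queued deliberately):

```shell
mkdir -p mnt                    # stands in for the mounted filesystem
touch mnt/backup-queued         # done by the queue-up script

# Fragment that would live in /.profile: after a crash the filesystem
# holding the flag is not mounted, the test fails, and nothing runs.
if [ -f mnt/backup-queued ]
then
    rm -f mnt/backup-queued     # run at most once per queue-up
    echo "backup would run here"    # exec the real backup script
fi
```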

--
	"This will be dynamically handled, possibly correctly, in 4.1."
		- Dan Davison on streams configuration in SunOS 4.0
cks@hawkwind.utcs.toronto.edu	           ...!{utgpu,utzoo,watmath}!utgpu!cks

peter@ficc.ferranti.com (Peter da Silva) (04/23/91)

In article <1991Apr12.101319.8523@jarvis.csri.toronto.edu> thomson@hub.toronto.edu (Brian Thomson) writes:
> In article <PONA272@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> >In article <19183@rpp386.cactus.org> jfh@rpp386.cactus.org (John F Haugh II) writes:
> >> Likewise, if you can only restore
> >> files that were backed up using the special utilities, you can't just
> >> put any program you want on the system.

> >Sure: back up to tape, read tape on a non-secure system, edit it, write
> >it out again, and restore.

> You don't get a secure installation by buying a secure machine and
> putting it in a location where a user can tamper with its backup tapes.

We're not talking about random users here. We're talking about the regular
backup operators.

> Of course secure systems require physical safeguards!

Of course, but who watches the people who work behind those safeguards?
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

jfh@rpp386.cactus.org (John F Haugh II) (04/25/91)

In article <QRXAL18@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes:
> In article <1991Apr12.101319.8523@jarvis.csri.toronto.edu> thomson@hub.toronto.edu (Brian Thomson) writes:
> > In article <PONA272@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> > >Sure: back up to tape, read tape on a non-secure system, edit it, write
> > >it out again, and restore.
> 
> > You don't get a secure installation by buying a secure machine and
> > putting it in a location where a user can tamper with its backup tapes.
> 
> We're not talking about random users here. We're talking about the regular
> backup operators.
 
You are assuming that the restore program will allow you to restore
files that were not placed on the tape by the backup utility of the
system it came from.

> > Of course secure systems require physical safeguards!
> 
> Of course, but who watches the people who work behind those safeguards?

At some point in time you have to trust the people that you have
given the authority to do these things.  It's like the argument
about the earth riding on the back of turtles.  It can't be
turtles all the way down.
-- 
John F. Haugh II        | Distribution to  | UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 255-8251 | GEnie PROHIBITED :-) |  Domain: jfh@rpp386.cactus.org
"If liberals interpreted the 2nd Amendment the same way they interpret the
 rest of the Constitution, gun ownership would be mandatory."

thomson@hub.toronto.edu (Brian Thomson) (04/25/91)

In article <QRXAL18@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <1991Apr12.101319.8523@jarvis.csri.toronto.edu> thomson@hub.toronto.edu (Brian Thomson) writes:
>
>> You don't get a secure installation by buying a secure machine and
>> putting it in a location where a user can tamper with its backup tapes.
>
>We're not talking about random users here. We're talking about the regular
>backup operators.
>
>> Of course secure systems require physical safeguards!
>
>Of course, but who watches the people who work behind those safeguards?

That depends.

Maybe no-one does - that is the situation at many machine rooms in
this university.

The other extreme is that the operators are watched by security staff.  
Closely.  I mean guards at the doors to make sure that tapes move only between
the archive and the IO room (and certainly not out of the building!),
and they are signed in and out when that happens.  It is also prudent
to divide up duties, so that the person who mounts and dismounts tapes
is not the same person who uses them (i.e. does not have an account that
is privileged to use tapes).

If you feel that the first situation is too lax, or the second too strict,
you have missed the point.  It is in every case a question of cost versus
benefit, and the "benefit" is really the absence of the damage that might be
suffered.  At the university, the possible damage is not great, and we
don't feel that intruders would be highly motivated, so low-cost security
measures are expected to be adequate.  This means we trust our operators
quite a bit, but not because of their exemplary character; rather,
because the overall risk is not high.  Banks, however, don't give the
keys to the vault to any single individual - two or three simultaneous
keys, given to different people, is more like it - because the
temptation is too strong and the potential loss too great.

So, in the case of this hypothetical installation, what are the risks?
How inviting a target are you?  If you are not happy with the present
procedures, separation of duty is potent medicine, but it will probably
interfere with productivity and may even require hiring new staff.
Those are part of the cost - that you must balance against the benefit.

-- 
		    Brian Thomson,	    CSRI Univ. of Toronto
		    utcsri!uthub!thomson, thomson@hub.toronto.edu