[comp.unix.wizards] NFS Security: a summary

tpc@leibniz.UUCP (Tom Chmara) (09/03/88)

Sorry for the long delay in actually getting a reply out; between
going on course and losing my feed, it's taken a while to get it
together...

My request dealt with NFS security:  how easy is it to break?
I got a number of replies, most of which had a common theme:  NFS
isn't particularly secure.

Excerpts follow...note that some messages (containing duplicate
information) were not republished, and referenced messages were omitted
in favour of the messages that quote them.  I appreciate everyone's
input, but I'd like to keep this posting as short as I can.

-----------------------------------------------------------------------------

From: arosen@eagle.ulowell.edu (MFHorn)
Date: 14 Aug 88 19:35:46 GMT


An NFS server maps uid 0 from incoming RPC requests to 'nobody', which
is configured into the kernel.  If 'nobody' is set to 0, then anyone
with root access on another machine can get it on yours.  The default
setting for nobody is (in most implementations) -2.

Also, if you don't export any filesystems to a particular host, that
host can do nothing to your host even if nobody is set to 0.  [NFS
under Ultrix maps nobody per exported filesystem.]
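The squash described above amounts to a one-line check on the server
side.  A minimal C sketch (illustrative only, not actual kernel code;
the function name and the idea of passing "nobody" in as a parameter
are mine):

```c
#include <stdint.h>

/* Minimal sketch of the server-side uid squash described above:
 * uid 0 in an incoming RPC credential is replaced by the kernel's
 * configured "nobody" uid; all other uids pass through unchanged.
 * Note the failure mode: if nobody is (mis)configured as 0, the
 * mapping becomes a no-op and remote root is root here too. */
uint32_t map_uid(uint32_t req_uid, uint32_t nobody)
{
    return (req_uid == 0) ? nobody : req_uid;
}
```

With nobody at the usual default of -2, remote root requests land on
an unprivileged uid; with nobody set to 0 the check does nothing,
which is exactly the hole described above.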

From article <23289@labrea.Stanford.EDU>, by karish@denali.stanford.edu
	(Chuck Karish):
> Some implementations of NFS assume that user ID numbers are congruent
> on server and client.  This means that a bad guy can empower a
> Trojan horse on the remotely-mounted filesystem, then use it from
> the server machine to get privileged access.

If root access is refused (see above), then the bad guy won't be able
to create a set-uid root file on the server.

> Do current versions of NFS provide a way for managers to control mapping
> of user ID's?

The kernel can only map uid 0.  Yellow Pages, a service provided with
NFS, helps managers maintain a network-wide password file.

-----------------------------------------------------------------------------

From: chris@mimsy.UUCP (Chris Torek)
Date: 15 Aug 88 20:47:18 GMT
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742

In article <8610@swan.ulowell.edu> arosen@eagle.ulowell.edu (MFHorn) writes:
>An NFS server maps uid 0 from incoming RPC requests to 'nobody', which
>is configured into the kernel. ... The default setting for nobody is
>(in most implementations) -2.

This mapping is almost useless.  If I am root on machine sneaky.edu,
and want to be anyone else on machine uptight.edu, all I have to do
is set my uid on sneaky.  Granted, I cannot do anything as uid 0 on
uptight, but I can do anything as anyone else.
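The reason this works is that an AUTH_UNIX credential is filled in
entirely by the client.  A sketch of the idea in C (the struct here is
illustrative, loosely modeled on the fields carried by Sun RPC's
AUTH_UNIX flavor; nothing below is the real RPC code):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative model of what an AUTH_UNIX RPC credential carries.
 * Every field is supplied by the client; the server has no way to
 * verify any of them. */
struct unix_cred {
    uint32_t stamp;          /* arbitrary client timestamp */
    char     machname[255];  /* client-chosen machine name */
    uint32_t uid;            /* client-asserted user ID */
    uint32_t gid;            /* client-asserted group ID */
};

/* Root on the client can "become" any uid simply by asserting it. */
struct unix_cred forge_cred(const char *host, uint32_t uid, uint32_t gid)
{
    struct unix_cred c;
    c.stamp = 0;
    strncpy(c.machname, host, sizeof c.machname - 1);
    c.machname[sizeof c.machname - 1] = '\0';
    c.uid = uid;
    c.gid = gid;
    return c;
}
```

This is why the uid-0 squash alone buys so little: the server maps
root to nobody while believing any *other* uid the client asserts.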

>Also, if you don't export any filesystems to a particular host, that
>host can do nothing to your host even if nobody is set to 0.

*snicker*

Actually, this almost works in some NFS implementations.  In old SunOSes
(I have no current ones so I have no idea if it has been fixed there),
all I have to do is cobble up a request packet that claims my hostname
is one to which you do export some file system, and your mount daemon
will believe me.  It does not even check the Internet address, just the
name I stuff in my request packet!
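The flaw described here boils down to checking a client-supplied
string instead of the connection's actual source address.  A
hypothetical sketch (the struct, function names, and the stand-in
export list are all made up for illustration):

```c
#include <string.h>

/* Hypothetical model of the broken mount-daemon check: the request
 * packet carries a hostname string that the client itself filled in. */
struct mount_req {
    const char   *claimed_hostname;  /* copied straight from the packet */
    unsigned long source_addr;       /* actual IP source, ignored below */
};

/* Stand-in for "is this host in the export list?" */
static int exported_to(const char *host)
{
    return strcmp(host, "trusted.uptight.edu") == 0;
}

/* The broken check: it believes whatever name the packet claims and
 * never consults source_addr, so any client can name a trusted host. */
int mountd_check_broken(const struct mount_req *r)
{
    return exported_to(r->claimed_hostname);
}
```

Verifying the actual source address closes this particular hole, but
forged file handles and IP spoofing remain.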

Even if you fix this, all I have to do is make up a suitable file handle.
That can be anywhere from trivial (passive spying will show some fine
handles) to somewhat hard.  What is needed is real authentication.

-----------------------------------------------------------------------------

From: bnr-di!munnari!cluster.cs.su.oz.au!rex
To: munnari!bnr-di!leibniz!tpc
Subject: NFS security
Cc: oz.au!chris@softway.ARPA

The speaker was quite correct.  NFS is broken.  From being root on a
machine you can corrupt ANY UNIX machine that has common filesystems
with the initially corrupted machine.  There is not one hole, but a
collection of holes that all add up to swiss cheese.  DO NOT, and I
repeat, DO NOT allow your machines to share filesystems with suspect
machines.  Do not allow foreign people onto your machines.  If you
have the SUNRPC code underneath your NFS then you are in more trouble.
Are you in charge of security?  If so, how do I know that you are?

					Rex di Bona.
					rex@cluster.cs.su.oz
					Basser Dept. Comp. Sci.
					University of Sydney, 2006,
					N.S.W. AUSTRALIA.


	/*
	 *  aside:  no, I am not in charge of security; as one of
	 *  the more UNIX-literate people at this site, I've been asked
	 *  to investigate.
	 *  I do, however, consider this a specious inquiry:  in cryptography,
	 *  no algorithm is considered secure whose secrecy relies on the
	 *  secrecy of the algorithm.  Ergo, better disseminate the info
	 *  now, and weep a little less later on.  
	 */
----------------------------------------------------------------------------

Thanks to all for replying.
Hope this information is useful to someone/anyone.

	---tpc---
-- 
------------------------------------------------------------------------
Tom Chmara			UUCP:  ..utgpu!bnr-vpa!bnr-di!leibniz!tpc
BNR Ltd.  			BITNET: TPC@BNR.CA

kai@uicsrd.csrd.uiuc.edu (09/08/88)

I haven't seen anyone mention ANY security problems involving NFS that
don't require you to already have the keys to the kingdom.  Plain
unprivileged joe user has exactly the same access on remote file
systems as he does on local file systems, no more, no less.

OF COURSE if you already have superuser privileges to system x, it doesn't
matter if you have NFS access to system y or not, you probably have the power
to cause damage on other systems.  If you want to discuss the standard
hollyweird horror movie theme as applied to systems administration, read on:

As I understand it (as systems administrator for multiple Sequent hosts
sharing disks between them using NFS), NFS requires that uid numbers be
identical on client and server (for example, user 900 is the same person on
both systems).  Typically in environments where this is true, the server
"trusts" the client (via /etc/hosts.equiv), which applies to rlogin and rsh
commands too.  Even if you don't have NFS, if you're root, you can su to
another user and simply rlogin to other hosts and wreak havoc.  NFS
doesn't provide any additional hazard.  And in environments where hosts don't
trust each other, root can search for ~user/.netrc files or special aliases
and scripts to get remote system access.

However, this is arguing a pretty silly point.  If you're a superuser, you
probably aren't the type of person that intentionally tries to cause damage
to systems.  If access to the superuser account is your problem, then NFS
isn't any additional threat.  If you can't trust a remote system
administrator, then your system probably shouldn't 'trust' their system, and
you shouldn't export file systems to them.

On our Sequent systems (DYNIX V3) with NFS, it IS possible to mount file
systems with the option to ignore the setuid bit on files.  I would like to
know if this is an NFS, Sequent, 4.2 BSD or 4.3 BSD feature.

In general, I like NFS.  I've recovered disk space now that users can share
a home directory between all systems, and groups that needed just a medium
amount of space on multiple systems can share one large partition between
systems, instead of having one medium one on each system.  Now, when the
system I'm using is being crushed by some other user, affecting my work, I
just login to another system that isn't so heavily loaded, and continue my
work.  We don't need to keep separate source code directories, timesheet
information, event calendars, etc. on each system anymore.

The only problems I've had with NFS deal with some parallel programs when
executables reside on remote file systems, performance problems versus local
file systems, Sun's PC/NFS not allowing access to all the user's groups on
the unix system, and the inevitable argument about whether hard or soft
mounted file systems are better (we use soft - I hate it when 'df' hangs
because a system is down for backups, and we don't have any critical database
operations that simply MUST wait for the remote system to respond).

Patrick Wolfe  (pwolfe@kai.com, kailand!pwolfe)

chris@mimsy.UUCP (Chris Torek) (09/09/88)

In article <43200038@uicsrd.csrd.uiuc.edu> kai@uicsrd.csrd.uiuc.edu writes:
>I haven't seen anyone mention ANY security problems involving NFS that don't
>require you already have the keys to the kingdom.  [root access somewhere]

If you have a workstation on your desk, you have root access to that
workstation.  It may take a while to break in, but if I have physical
access to your machines, I have root access to your machines.  It is
as simple as that (which may not be simple!).
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

madd@bu-cs.BU.EDU (Jim Frost) (09/09/88)

In article <43200038@uicsrd.csrd.uiuc.edu> kai@uicsrd.csrd.uiuc.edu writes:
|I haven't seen anyone mention ANY security problems involving NFS that don't
|require you already have the keys to the kingdom.  Plain unprivileged joe
|user has exactly the same access on remote file systems as he does on local
|file systems, no more, no less.
|
|OF COURSE if you already have superuser privileges to system x, it doesn't
|matter if you have NFS access to system y or not, you probably have the power
|to cause damage on other systems.

One thing you might not have thought of.  Consider the case of a Sun
workstation, relatively public (ie user doesn't know the root password
of either the workstation or the server).  How could a user make a
mess of this?  Hot-key the workstation to the firmware, boot single
user.  The 1-to-1 mapping of uids ensures that the user need only
alter the local passwd file to be able to do virtually anything to the
remote filesystem, so long as whatever they're changing isn't
root-only writable.  I haven't looked into NFS group permissions, but
the same type of problem would be even more disastrous if group
permissions carry over NFS (consider group wheel on most systems).
Luckily it's a little more complicated than this to actually become
root on the remote system, but if you can be anyone else....

|On our Sequent systems (DYNIX V3) with NFS, it IS possible to mount file
|systems with the option to ignore the setuid bit on files.  I would like to
|know if this is an NFS, Sequent, 4.2 BSD or 4.3 BSD feature.

It's not an issue.  You don't mount disks you don't trust.  In the
other direction, if someone uses a local setuid program to try to futz
with an NFS connection to a remote disk, the root uid doesn't carry, so
you can't hurt anything.  Or at least it's a little harder.

|Patrick Wolfe  (pwolfe@kai.com, kailand!pwolfe)

jim frost
adt!madd@bu-it.bu.edu

mishkin@apollo.COM (Nathaniel Mishkin) (09/09/88)

In article <13457@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
>In article <43200038@uicsrd.csrd.uiuc.edu> kai@uicsrd.csrd.uiuc.edu writes:
>>I haven't seen anyone mention ANY security problems involving NFS that don't
>>require you already have the keys to the kingdom.  [root access somewhere]
>
>If you have a workstation on your desk, you have root access to that
>workstation.  It may take a while to break in, but if I have physical
>access to your machines, I have root access to your machines.  It is
>as simple as that (which may not be simple!).

Not even to mention an IBM PC that supports UDP/IP.  Bring up SUN RPC
and start making those NFS requests with the uid of your choice.  Even
simpler, you could just start with PC/NFS.  (Yes, I know how glassy my
house is too.)  Ah, what a fool's paradise we're all living in.  I'm
waiting for some Chernobyl of computer security to hit before people wake
up to the exposure.  "Oh, but I *trust* all those machines in my network."
Hmmph.  If you have more than 10, you just can't.

-- 
                    -- Nat Mishkin
                       Apollo Computer Inc., Chelmsford, MA
                       mishkin@apollo.com

eirik@tekcrl.TEK.COM (Eirik Fuller) (09/10/88)

In article <43200038@uicsrd.csrd.uiuc.edu> kai@uicsrd.csrd.uiuc.edu writes:
>
> ...
>
>As I understand it (as systems administrator for multiple Sequent hosts
>sharing disks between them using NFS), NFS requires that uid numbers be
>identical on client and server (for example, user 900 is the same person on
>both systems).  Typically in environments where this is true, the server
>"trusts" the client (via /etc/hosts.equiv), which applies to rlogin and rsh
>commands too.  Even if you don't have NFS, if you're root, you can su to
>another user and simply rlogin to other hosts and wreak havoc.  NFS
>doesn't provide any additional hazard.

This is valid in some environments, and in fact clears up some of my
confusion about why NFS was implemented the way it was, but it misses
the point entirely about what's wrong with NFS.

> ...
>.  If access to the superuser account is your problem, then NFS
>isn't any additional threat.  If you can't trust a remote system
>administrator, then your system probably shouldn't 'trust' their system, and
>you shouldn't export file systems to them.
>

This statement pretty much summarizes what's wrong with NFS.  With
.rhosts, individual users can give away their own files.  With NFS,
only an administrator can give away files, and only an entire file
system at a time (I'm aware of exceptions to this, but they weren't
part of the original NFS design).

Todd Brunhoff's RFS at least had the sense to use the .rhosts mechanism
so that users could give away their own files.  I realize it has
problems of its own, but in this one respect it makes much more sense
in the usual BSD environment than the NFS mechanism.  I suspect the
NFS approach is better performance-wise, but nonetheless it is far
worse than rsh from a security viewpoint.

geoff@eagle_snax.UUCP ( R.H. coast near the top) (09/12/88)

In article <3e5d8f8f.13422@apollo.COM>, mishkin@apollo.COM (Nathaniel Mishkin) writes:
> Not even to mention an IBM PC that supports UDP/IP.  Bring up SUN RPC
> and start making those NFS requests with the uid of your choice.  Even
> simpler, you could just start with PC/NFS.
C'mon, Nat, I'll buy you a Samuel Smith's ale if you can correctly patch all
of the PC-NFS internal data structures to do this. The only reasonable
way of breaking it would be to run a rogue PCNFSD somewhere, which (once
again) assumes super-user access on some system.

When people point out the lack of security in the current generation of
distributed architectures, I usually reply that the mechanisms are there
to stop people from making fools of themselves (e.g. inadvertently
deleting a colleague's file, or maybe an OS file) or from stumbling across
material they shouldn't see. In most of the companies we work for, the
real security is on the periphery of the building, network, whatever:
inside the shell we usually make the convenience/security trade-off
in favor of convenience. Fortunately personal idiosyncrasy and
love of complexity provide a second line of defense through intimidation...

>   Ah, what a fool's paradise we're all living in. 

Just focus on the "paradise" bit :-)

>                     -- Nat Mishkin
-- 
Geoff Arnold, Sun Microsystems Inc.+------------------------------------------+ 
PC Distrib. Sys. (home of PC-NFS)  |If you do nothing, you will automatically |
UUCP:{hplabs,decwrl...}!sun!garnold|receive our Disclaimer of the Month choice|
ARPA:geoff@sun.com                 +------------------------------------------+

haynes@ucscc.UCSC.EDU (99700000) (09/12/88)

In article <350@eagle_snax.UUCP> geoff@eagle_snax.UUCP ( R.H. coast near the top) writes:
>... In most of the companies we work for, the
>real security is on the periphery of the building, network, whatever:
>inside the shell we usually make the convenience/security trade-off
>in favor of convenience. 

This may be all very well in the business environment, but in the
academic world we view things rather differently.  Here the potential
intruders and their potential victims are already inside the shell.

> Fortunately personal idiosyncrasy and
>love of complexity provide a second line of defense through intimidation...

or an interesting challenge for the bored but clever student.
haynes@ucscc.ucsc.edu
haynes@ucscc.bitnet
..ucbvax!ucscc!haynes

peter@ficc.uu.net (Peter da Silva) (09/13/88)

In article <3e5d8f8f.13422@apollo.COM>, mishkin@apollo.COM (Nathaniel Mishkin) writes:
> Ah, what a fool's paradise we're all living in.  I'm
> waiting for some Chernobyl of computer security to hit before people wake
> up to the exposure.  "Oh, but I *trust* all those machines in my network."
> Hmmph.  If you have more than 10, you just can't.

Just how heavy a load (in CPU power, throughput, and un-unix-like behaviour)
does the system used by Project Athena impose?
-- 
Peter da Silva  `-_-'  Ferranti International Controls Corporation.
"Have you hugged  U  your wolf today?"            peter@ficc.uu.net

wesommer@athena.mit.edu (Bill Sommerfeld) (09/15/88)

[ Disclaimer: "we" refers to Project Athena, not my current employer,
Apollo Computer ]

In article <1487@ficc.uu.net>, peter@ficc (Peter da Silva) writes:
>Just how heavy a load (in CPU power, throughput, and un-unix-like behaviour)
>does the system used by Project Athena impose?

CPU power needed: one hash table lookup based on source IP address and
the user ID in the source packet per NFS remote procedure call.  We
use a fairly stupid hash function (the sum of the "interesting" bytes
of the IP address and the user ID, modulo 256) as an index into a
256-element array; overflows are chained off in a linked list (which
should be self-reorganizing, but we never bothered to implement that).
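That lookup is cheap enough to sketch in full; here it is in C (which
bytes of the address count as "interesting" is an assumption here, the
two low-order octets, which vary the most on a single campus network):

```c
#include <stdint.h>

/* The per-RPC hash described above: sum the "interesting" bytes of
 * the source IP address and the user ID, modulo 256, giving an index
 * into a 256-slot table (collisions chained off in a linked list).
 * Treating the two low-order octets as the interesting ones is an
 * assumption, not a detail from the original posting. */
unsigned int athena_hash(uint32_t ipaddr, uint32_t uid)
{
    unsigned int sum = (ipaddr & 0xffu) + ((ipaddr >> 8) & 0xffu) + uid;
    return sum % 256;
}
```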

Impact on throughput: not noticeable (although we're using VAX
11/750's as fileservers).

I'm not aware of any performance measurements; performance wasn't
really an issue--we had to do this, or else NFS would have been
unusable (for writeable files at least; we handle read-only files
through a different protocol).

By the way, last I checked, the side of the NFS protocol which checked
read-only mounts was the client, not the server.  Don't fool yourself
into thinking that you can export a partition read-only unless it is
mounted read-only _on the server_...

Un-UNIX like behavior: noticeable, but not particularly annoying.  The
biggest problem is doing an "su" in a remote directory which isn't
readable by "nobody".  The Berkeley C shell exits SILENTLY if getwd()
fails at startup time (I think the code may actually print an error
message on stderr, but by that time the file descriptors may already
have been moved ..)

Also, because of a bug in the NFS protocol, you have to be sure that
your group sets on client and server match; extra groups on the
client result in "io errors", while extra groups on the server don't
give you any effective access (because the client didn't let you open
the file).

I must point out that it's not really secure--it's spoofable by people
who can set the source IP address on their packets, and it's downright
insecure if the client is a timesharing system (since anyone can send
UDP packets with arbitrary contents)--but it's good enough for an
academic environment, at least for the foreseeable future.

					- Bill Sommerfeld
					(now with Apollo Computer, Inc.)

[When I'm done with my current project, I'm going to see if I can
browbeat apollo into doing the same thing for the domain filesystem.
Any apollo customers out there want to make a request for that
feature?]
			

guy@gorodish.Sun.COM (Guy Harris) (09/15/88)

> By the way, last I checked, the side of the NFS protocol which checked
> read-only mounts was the client, not the server.  Don't fool yourself
> into thinking that you can export a partition read-only unless it is
> mounted read-only _on the server_...

I presume you mean "NFS implementation", not "NFS protocol".

The SunOS server code, at least, has checked for read-only mounts since at
least SunOS 3.2.  I don't know at which point these changes made it into the
"portable" NFS source code, or at what point various vendors picked it up.  I
think, however, that this was introduced into SunOS at the same time read-only
exports were introduced, so I would expect any system supporting read-only
exports to do the checks correctly.

Check "nfs_server.c", paying special attention to the "rdonly()" macro defined
at the beginning, which checks whether the file system is exported read-only,
and the procedures that implement operations that modify the file system, which
call "rdonly()" to make sure the file system wasn't exported read-only.
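The shape of that check is simple; a hedged sketch follows (the
struct, flag name, and function are illustrative stand-ins, not the
actual nfs_server.c source):

```c
#include <errno.h>

/* Illustrative model of the server-side read-only check: each export
 * carries flags, and every modifying NFS procedure tests them before
 * touching the file system.  Names here are made up; only the idea
 * matches the rdonly() macro described above. */
struct exportinfo { int ex_flags; };
#define EX_RDONLY 0x01

#define rdonly(exi) ((exi)->ex_flags & EX_RDONLY)

/* Called at the top of write, create, remove, etc. */
int nfs_modify_check(const struct exportinfo *exi)
{
    return rdonly(exi) ? EROFS : 0;  /* EROFS: read-only file system */
}
```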

It works; at one point, I mounted a file system read-write which was mounted
read-write on the server but exported read-only.  Attempts to write to it got
EROFS.

> Un-UNIX like behavior: noticeable, but not particularly annoying.  The
> biggest problem is doing an "su" in a remote directory which isn't
> readable by "nobody".  The Berkeley C shell exits SILENTLY if getwd()
> fails at startup time (I think the code may actually print an error
> message on stderr, but by that time the file descriptors may already
> have been moved ..)

Yes, this is a misfeature of the C shell.  Bob Gilligan here at Sun put a fix
into the 4.0 version; here is the "diff -c" listing for "sh.dir.c" (I make no
claim that this is correct; the innards of the C shell are one thing of which I
am largely blissfully ignorant):

*** /usr/src/bin/csh/sh.dir.c	Tue Jun 11 18:59:53 1985
--- ./sh.dir.c	Mon Sep 12 11:53:53 1988
***************
*** 37,43 ****
  	else {
  		cp = getwd(path);
  		if (cp == NULL) {
! 			(void) write(2, path, strlen(path));
  			exit(1);
  		}
  	}
--- 37,44 ----
  	else {
  		cp = getwd(path);
  		if (cp == NULL) {
! 			haderr = 1;
! 			printf ("%s\n", path);
  			exit(1);
  		}
  	}

arosen@hawk.ulowell.edu (MFHorn) (09/15/88)

From article <7070@bloom-beacon.MIT.EDU>, by wesommer@athena.mit.edu (Bill Sommerfeld):
> By the way, last I checked, the side of the NFS protocol which checked
> read-only mounts was the client, not the server.  Don't fool yourself
> into thinking that you can export a partition read-only unless it is
> mounted read-only _on the server_...

In Ultrix 2.2, you can export a filesystem read-only, even if it is
mounted read-write on the server.  Do any other NFS implementations
support this?  Do any other vendors plan on implementing this?

It is a very nice feature, especially when you have 30 or 40 diskless
users with physical access to their workstation.

Andy Rosen           | arosen@hawk.ulowell.edu | "I got this guitar and I
ULowell, Box #3031   | ulowell!arosen          |  learned how to make it
Lowell, Ma 01854     |                         |  talk" -Thunder Road
                   RD in '88 - The way it should be