[comp.os.research] Non-secure workstations

pardo@june.cs.washington.edu (David Keppel) (10/17/88)

crum@lipari.usc.edu (Gary L. Crum) writes:
>bzs@xenna (Barry Shein) writes:
>>[ Optical disk == portable file system ]
>Barry's environment where students boot cubes off their own platters
>poses many interesting security problems!  In such an environment, cubes
>cannot "trust" each other because users have their own system disks
>and hence all users are superusers for their respective machines.

I believe (somebody tell me?) that Andrew and possibly some other
systems have solved this.  When you do remote file accesses (e.g.,
mount some file system) you are required to identify yourself.

The "root partition" (e.g., the root of the file system and basic
binaries) are made public, anybody can boot and use them.  When you
boot the machine, it goes and says "I want to mount root, I am
nobody".  The file server (kept in a locked room :-) says "oh, sure,
anybody can look at these" and sends back mount privilege.

When you log in, there aren't any user files mounted.  You give your
password, which is sent to the server (e.g., using public-key
cryptography to keep the password secure), giving sure identification
of "you".  The server looks in its access control list, and if you are
there, lets you mount the file system (e.g., /usr).

Finally, even once you have the file system mounted, you still can't
go and clobber everybody else's files, because the file server is
still ultimately responsible for storing the data; when you try to
write back bogus data it says "you are X, you are trying to write on
Y's data, Y didn't say you could, so no go".  Read access works exactly
the same way.
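
For concreteness, the server-side check might look something like this
Python sketch (the ACL layout and all names are mine, purely for
illustration -- not Andrew's actual interface):

    # Hypothetical file server: every read/write is checked against an
    # access control list held by the server, not by the workstation.
    ACL = {
        "/":          {"read": {"*"},     "write": set()},   # public root
        "/usr/pardo": {"read": {"pardo"}, "write": {"pardo"}},
    }

    def allowed(user, path, op):
        """Return True iff `user` may perform `op` on `path`."""
        perms = ACL.get(path, {}).get(op, set())
        return "*" in perms or user in perms

    assert allowed("nobody", "/", "read")               # anyone mounts root
    assert not allowed("pardo", "/usr/riedl", "write")  # no clobbering Y's data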

    ;-D on  ( If it's so damn secure, how did *I* get the password? )  Pardo
-- 
		    pardo@cs.washington.edu
    {rutgers,cornell,ucsd,ubc-cs,tektronix}!uw-beaver!june!pardo

fouts@lemming. (Marty Fouts) (10/18/88)

In article <5158@saturn.ucsc.edu> pardo@june.cs.washington.edu writes:

   crum@lipari.usc.edu (Gary L. Crum) writes:
   >bzs@xenna (Barry Shein) writes:
   >>[ Optical disk == portable file system ]
   >Barry's environment where students boot cubes off their own platters
   >poses many interesting security problems!  In such an environment, cubes
   >cannot "trust" each other because users have their own system disks
   >and hence all users are superusers for their respective machines.

   I believe (somebody tell me?) that Andrew and possibly some other
   systems have solved this.  When you do remote file accesses (e.g.,
   mount some file system) you are required to identify yourself.

The problem with this is that there is no way to prove that the 'you'
identifying 'yourself' is really you in the presence of promiscuous
or tappable transmission media.  Since the mid-70s, the open literature
has described ways around such authentication schemes.

Sending clear text passwords is obviously wrong, and no one would
force users to do it. (;-0  Sending constant encrypted passwords is
also wrong, although in a slightly more subtle way.  (To beat you, I
simply record a session in which you get a good log in, and then when
I want to fake you, I replay your half of the session.)
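
(A sketch of the replay in Python, if it helps; the hash is just a
stand-in for whatever "encryption" produces the constant token, and
every name is made up.  The point is that a constant token is just
bytes anyone on the wire can record and resend:)

    import hashlib

    def login_token(password):
        # "Constant encrypted password": same input, same bytes, every time.
        return hashlib.sha256(password.encode()).hexdigest()

    recorded = login_token("barrys-secret")   # tap one good login session...
    replayed = recorded                       # ...then resend it verbatim later
    assert replayed == login_token("barrys-secret")   # server can't tell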

Attempts to get around this involve either schemes that make it hard to
forge the physical address (i.e., which 48-bit transceiver number) and
then check it, or schemes that require the encrypted password to be
changed every time.  The first method is fairly easy to defeat in
Shein's proposed environment, and isn't very useful when the 'you' is
moving around anyway.  The second method shows more promise, but. . .

Anyway, authentication in a hostile network is at best a currently
unsolved problem, and at worst an unsolvable problem.

Marty
--
+-+-+-+     I don't know who I am, why should you?     +-+-+-+
   |        fouts@lemming.nas.nasa.gov                    |
   |        ...!ames!orville!fouts                        |
   |        Never attribute to malice what can be         |
+-+-+-+     explained by incompetence.                 +-+-+-+

riedl@purdue.edu (John T Riedl) (10/19/88)

Needham and Schroeder's paper "Using Encryption for Authentication in
Large Networks of Computers" presents algorithms for authenticating
conversations in a possibly hostile distributed environment.  I
believe these algorithms form the basis for the Project Athena
Kerberos authentication server.

The model is that each entity has a key, and a single authentication
server knows the keys of all entities.  Without going into details,
the basic tricks are 1) use the user's password as a key; 2) include
as part of each encrypted message an additional integer that has never
been used in such a message before (to guard against replays); 3) the
authentication server returns enough (encrypted!) information to the
user that he can identify himself convincingly to his conversation
partner.

In article  <5173@saturn.ucsc.edu> fouts@lemming. (Marty Fouts) writes:
>The problem with this is that there is no way to prove that the 'you'
>identifying 'yourself' is really you in the presence of promiscuous
>or tappable transmission media.  Since the mid-70s, the open literature
>has described ways around such authentication schemes.
>...
>Anyway, authentication in a hostile network is at best a currently
>unsolved problem, and at worst an unsolvable problem.
>
>Marty

Marty, do you know of specific problems with these techniques?  Your
examples of methods that don't work seem naive.  At the least, they
aren't a convincing basis for such strong statements about the
impossibility of authentication.

John
-- 
John Riedl
{ucbvax,decvax,hplabs}!purdue!riedl  -or-  riedl@cs.purdue.edu

wyatt%cfa@husc6.harvard.edu (Bill Wyatt) (10/20/88)

> [...]
> The problem with this is that there is no way to prove that the 'you'
> identifying 'yourself' is really you in the presence of promiscuous
> or tappable transmission media. [...]
> 
> [...]
> Anyway, authentication in a hostile network is at best a currently
> unsolved problem, and at worst an unsolvable problem.

Not true, at least if you allow *some* machines to be trusted.
Check out MIT/Athena's `Kerberos' network authentication system,
which involves having trusted (and presumably physically secure)
systems act as authenticators for other systems, which can be as
physically insecure as you like.
-- 

Bill    UUCP:  {husc6,cmcl2,mit-eddie}!harvard!cfa!wyatt
Wyatt   ARPA:  wyatt@cfa.harvard.edu
         (or)  wyatt%cfa@harvard.harvard.edu
      BITNET:  wyatt@cfa2
        SPAN:  cfairt::wyatt 

fouts@lemming. (Marty Fouts) (10/20/88)

In article <5187@saturn.ucsc.edu> wyatt%cfa@husc6.harvard.edu (Bill Wyatt) writes:

   > 
   > [...]
   > Anyway, authentication in a hostile network is at best a currently
   > unsolved problem, and at worst an unsolvable problem.

   Not true, at least if you allow *some* machines to be trusted.
   Check out MIT/Athena's `Kerberos' network authentication system,
   which involves having trusted (and presumably physically secure)
   systems act as authenticators for other systems, which can be as
   physically insecure as you like.

I vaguely remember reading about work aimed at beating such a system,
but I don't have a reference handy.  However, in a truly hostile
environment the underlying assumption of hostility makes the idea of
trusting *some* machines seem rather silly.  What systems like
Kerberos do (besides giving their users a false sense of security) is
change the problem from faking one user to faking both the user and
the authentication system.

One of the reasons why public-key based signature systems haven't been
widely adopted is their dependence on a repository that everybody
trusts.  (Another is that not enough users see the need for such a
system, so they aren't yet commercially viable.)

The problem with using even a trusted authentication server is
that before I believe that you are the authentication server, you have
to prove you are the authentication server in the presence of the
possibility that someone else will pretend to be the authentication
server, or attempt to cause me to believe you are not.  In a
nonhostile environment the problem is trivial, but without hostility
not worth solving.

In a hostile environment it appears doable, if you are willing to make
assumptions along the lines of "OK, Kings-X, nobody cheat while I set
up an authentication server, and nobody pretend to be the
authentication server.  Done.  Go ahead cheat now, if you can."

Kerberos increases my confidence that you are the authentication
server, but it doesn't guarantee that you are.  If I'm willing to
accept the level of confidence that it provides, then I can claim to
be 'reasonably secure' or 'adequately secure' for my needs.  It still
doesn't make me 'secure.'

Marty

--
+-+-+-+     I don't know who I am, why should you?     +-+-+-+
   |        fouts@lemming.nas.nasa.gov                    |
   |        ...!ames!orville!fouts                        |
   |        Never attribute to malice what can be         |
+-+-+-+     explained by incompetence.                 +-+-+-+

jbs@fenchurch.MIT.EDU (Jeff Siegal) (10/23/88)

In article <5198@saturn.ucsc.edu> fouts@lemming. (Marty Fouts) writes:
>The problem with using even a trusted authentication server is
>that before I believe that you are the authentication server, you have
>to prove you are the authentication server in the presence of the
>possibility that someone else will pretend to be the authentication
>server, or attempt to cause me to believe you are not.

Unless I misunderstand your objection, it is addressed by Kerberos and
similar systems.  Kerberos authenticates the authentication server
(AS).  When you request a Ticket-Granting Ticket (TGT), it arrives
encrypted with your password.  Only the AS (or you) could perform this
encryption; this establishes the authenticity of the AS.  Only you (or
the AS) can decrypt the TGT; this establishes your identity.
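
A toy rendering of that exchange (Python, with a throwaway XOR cipher
standing in for the real cryptosystem, and all names invented here):

    import hashlib, os

    def toy_encrypt(key, data):
        # XOR with a key-derived pad; insecure, but shows the shape.
        pad = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
        return bytes(a ^ b for a, b in zip(data, pad))

    toy_decrypt = toy_encrypt        # XOR is its own inverse

    user_key = hashlib.sha256(b"my-password").digest()  # known to me and AS
    tgt = os.urandom(16)                                # the TGT itself
    reply = toy_encrypt(user_key, tgt)                  # what the AS sends

    # A sensible decryption proves the reply came from something that knew
    # my key (the AS); being able to decrypt at all is what proves *my*
    # identity in later exchanges.
    assert toy_decrypt(user_key, reply) == tgt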

Potential weaknesses:

1. Replays.  If a hostile agent wants to impersonate the AS, it can
record an encrypted TGT and replay it to you the next time you ask for
one.  In theory, not hard to protect against.  In practice, not hard
to protect against with a reasonable degree of confidence.

2. The cryptosystem.  If a hostile agent can decrypt the TGT, it can
impersonate you until the TGT expires (currently 8 hours for
Kerberos).  If it can derive the key, it can impersonate you forever
(since the key is your password).  In the current implementation of
Kerberos, the cryptosystem is a replaceable module.  In practice, DES
is currently used.

Jeff Siegal

riedl@purdue.edu (John T Riedl) (10/23/88)

In article  <5198@saturn.ucsc.edu> fouts@lemming. (Marty Fouts) writes:
>but I don't have a reference handy.  However, in a truly hostile
>environment the underlying assumption of hostility makes the idea of
>trusting *some* machines seem rather silly.  What systems like
>Kerberos do (besides giving their users a false sense of security) is
>change the problem from faking one user to faking both the user and
>the authentication system.
>...
>Marty

Marty, I think you need to do some more background reading.  The
concept of "trust" in a hostile environment must include a method for
validating the identity of the trusted host.  I don't know enough
about current research in the area to say that there are no
difficulties with known solutions, but your arguments that the problem
*cannot* be solved are based on some basic misunderstandings.  "Using
Encryption for Authentication in Large Networks of Computers", by
Needham and Schroeder (CACM 12/78) discusses precisely this problem
(i.e., authentication in an environment in which a hostile party may
seek to impersonate even a "trusted" server).  They give both
conventional and public-key algorithms for solving the problem.
Here's how their conventional algorithm works:

Background: "A" wants to establish a secure conversation with "B".  A
and B have secret keys, KA and KB respectively, known only to
themselves and their authentication server (called AS).  I'll
represent an encrypted message with {part1, part2, ...}^key.

1) A->AS: A, B, IA
where IA is an integer that is different for each request from A.

Hostile parties may attempt to pretend to be the authentication
server after seeing this message.  The reply from AS foils these
attempts by requiring the use of A's key - known only to A and AS.

2) AS->A: {IA, B, CK, {CK, A}^KB}^KA

CK is a conversation key for A and B to use in communicating.  It is
different for each conversation.  Only AS could create this message
because it requires both KA and KB.  Note that the part of the message
encrypted with KB is useless to A.  It will be used in the next
message to prove to B that AS authenticates A as the source.  Both IA
and B must appear in message 2.  IA prevents recording of AS messages,
and B prevents corruption of the original plaintext message from A to
AS.

3) A->B: {CK, A}^KB

Only B can discover CK.  B knows that AS authenticates A as the
source.  B may be worried that this conversation is a replay, so he
now asks A for proof that it is not:

4) B->A: {IB}^CK

5) A->B: {IB-1}^CK

IB is an integer that B has not used for this purpose before.  If an
intruder is replaying messages, he'll be unable to produce message 5.
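
For concreteness, here is the whole exchange in executable form (Python,
with a toy XOR cipher standing in for a real one; the variable names are
mine, not the paper's):

    import os, json, hashlib

    def enc(key, obj):
        # Toy cipher: XOR the JSON encoding against a key-derived pad.
        data = json.dumps(obj).encode()
        pad = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
        return bytes(a ^ b for a, b in zip(data, pad))

    def dec(key, blob):
        pad = hashlib.sha256(key).digest() * (len(blob) // 32 + 1)
        return json.loads(bytes(a ^ b for a, b in zip(blob, pad)).decode())

    KA, KB = os.urandom(16), os.urandom(16)  # secrets shared only with AS
    IA, IB = 12345, 999                      # fresh integers, never reused

    # 1) A -> AS: A, B, IA (in the clear)
    # 2) AS -> A: {IA, B, CK, {CK, A}^KB}^KA
    CK = os.urandom(16).hex()                # conversation key, new each time
    ticket = enc(KB, {"CK": CK, "from": "A"}).hex()
    msg2 = enc(KA, {"IA": IA, "B": "B", "CK": CK, "ticket": ticket})

    inner = dec(KA, msg2)
    assert inner["IA"] == IA and inner["B"] == "B"  # no replay, no tampering

    # 3) A -> B: {CK, A}^KB -- A forwards the ticket it cannot read itself
    opened = dec(KB, bytes.fromhex(inner["ticket"]))
    assert opened["from"] == "A"
    ck = bytes.fromhex(opened["CK"])

    # 4) B -> A: {IB}^CK   5) A -> B: {IB-1}^CK  -- B's guard against replays
    msg4 = enc(ck, {"IB": IB})
    msg5 = enc(ck, {"IB": dec(ck, msg4)["IB"] - 1})
    assert dec(ck, msg5)["IB"] == IB - 1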

-------
If you have further reservations about this approach, I recommend that
you read the paper.  It is well-written and easy to understand.  If
you find a reference that discusses problems with this approach in
detail, please post it.

John

-- 
John Riedl
{ucbvax,decvax,hplabs}!purdue!riedl  -or-  riedl@cs.purdue.edu

guido@uunet.UU.NET (Guido van Rossum) (10/24/88)

In article <5198@saturn.ucsc.edu> fouts@lemming. (Marty Fouts) writes:
>
>[...]  However, in a truly hostile
>environment the underlying assumption of hostility makes the idea of
>trusting *some* machines seem rather silly.  What systems like
>Kerberos do (besides giving their users a false sense of security) is
>change the problem from faking one user to faking both the user and
>the authentication system.
>
>Kerberos increases my confidence that you are the authentication
>server, but it doesn't guarantee that you are.  If I'm willing to
>accept the level of confidence that it provides, then I can claim to
>be 'reasonably secure' or 'adequately secure' for my needs.  It still
>doesn't make me 'secure.'

Under these definitions, nothing in the world can "make you secure".
You are as secure as you trust the manager of the authentication
service.  In the end it boils down to trusting other human beings, and
you know what a tricky business *that* is... 

What Kerberos does for you is to provide you with a tool where the
presence of insecure networks between you and a trusted (physically
secure!) authentication service doesn't prevent you from having the same
level of trust in it as when you were in the same room with it (and
disconnected from the outside world).  Not something to dismiss
lightly, and certainly not "giving users a false sense of security".

Contrary to what you seem to think, dependence on machines locked away in
physically protected rooms is not a weakness of the system -- it is the
ultimate operational condition of any authentication service.  There is no
reason why an authentication service could not be distributed -- but all
the machines holding sensitive data (users' private keys) should be
physically protected.  Users may trust their local authentication server
more than they trust a remote one -- they might know its manager
personally, for instance -- and adapt their level of trust accordingly
(e.g., you may be willing to exchange mail with a machine authenticated
through a remote authentication server, but you may not trust it to hold
your program sources).

But what if the manager of the authentication service cheats, you may
ask?  The real world has responses to this problem, and they do not
differ from responses to other forms of breach of contract.  You may
call the police, beat him up, sue him, and occasionally you will have to
accept your losses.  You can pay for insurance or live dangerously.  But
don't frustrate the discussion by requesting absolute security.  There
is no such thing (and I didn't write this, either :-).

--
Guido van Rossum, Centre for Mathematics and Computer Science (CWI), Amsterdam
guido@piring.cwi.nl or mcvax!piring!guido or guido%piring.cwi.nl@uunet.uu.net

Michael.Browne@k.gp.cs.cmu.edu (10/25/88)

In article <5198@saturn.ucsc.edu>, fouts@lemming. (Marty Fouts) writes:
> 
> In article <5187@saturn.ucsc.edu> wyatt%cfa@husc6.harvard.edu (Bill Wyatt) writes:
> 
>    > 
>    > [...]
>    > Anyway, authentication in a hostile network is at best a currently
>    > unsolved problem, and at worst an unsolvable problem.
> 
>    Not true, at least if you allow *some* machines to be trusted.
>    Check out MIT/Athena's `Kerberos' network authentication system,
>    [...]
> 
> I vaguely remember reading about work aimed at beating such a system,
> but I don't have a reference handy.  However, in a truly hostile
> environment the underlying assumption of hostility makes the idea of
> trusting *some* machines seem rather silly.  What systems like
> Kerberos do (besides giving their users a false sense of security) is
> change the problem from faking one user to faking both the user and
> the authentication system.
> 
> One of the reasons why public-key based signature systems haven't been
> widely adopted is their dependence on a repository that everybody
> trusts.  [...]
>
> The problem with using even a trusted authentication server is
> that before I believe that you are the authentication server, you have
> to prove you are the authentication server in the presence of the
> possibility that someone else will pretend to be the authentication
> server, or attempt to cause me to believe you are not.  In a
> nonhostile environment the problem is trivial, but without hostility
> not worth solving.
> 
> In a hostile environment it appears doable, if you are willing to make
> assumptions along the lines of "OK, Kings-X, nobody cheat while I set
> up an authentication server, and nobody pretend to be the
> authentication server.  Done.  Go ahead cheat now, if you can."
> 
> Kerberos increases my confidence that you are the authentication
> server, but it doesn't guarantee that you are.  If I'm willing to
> accept the level of confidence that it provides, then I can claim to
> be 'reasonably secure' or 'adequately secure' for my needs.  It still
> doesn't make me 'secure.'

You should read Needham and Schroeder's paper.  It gives ways of performing
authentication given reliable communication.  Neuman et al.'s Kerberos
system relies on the secret-key authentication algorithm presented there.

Another reason why public-key schemes are not very popular is that they tend
to require lots of CPU time.  Further, some have theoretical problems.  For
RSA, Chor showed that if you can figure out the last bit of the plaintext
given the cryptext with probability greater than 1/2 plus epsilon, then you
can invert RSA.  Also, RSA leaks information: the value of the Jacobi
symbol on the cryptext is always identical to the value of the Jacobi
symbol on the plaintext.  The scheme proposed by Rabin, which relies on
the ability of the recipient to extract square roots mod pq, though
provably equivalent to factoring, is susceptible to chosen-cryptext attacks
in which the adversary uses the recipient as an oracle to extract square
roots.

Let us consider the problem of authentication in a hostile network.  How do
we contact our authentication server?  The solution is relatively simple.
We set up N hosts which serve as authentication servers; each
authentication server uses a different, independently chosen puzzle for
zero-knowledge authentication.  The puzzles can be widely published; only
the solutions are kept secret (local to the appropriate server).  By using
zero-knowledge authentication, we guarantee that no information about the
solution is leaked.  When a client needs to obtain authentication
information, it must contact (at least) M of the N hosts and retrieve >= M
copies of the authentication information; we must have a quorum of M
identical copies before we trust the information.
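
The client-side quorum check itself is tiny; a sketch in Python (names
mine):

    from collections import Counter

    def quorum_value(responses, M):
        """Return the value reported identically by >= M servers, else None."""
        if not responses:
            return None
        value, count = Counter(responses).most_common(1)[0]
        return value if count >= M else None

    # Three of four (hypothetical) servers agree; with M = 3 we accept:
    assert quorum_value(["key-X", "key-X", "key-X", "key-Y"], 3) == "key-X"
    assert quorum_value(["key-X", "key-X", "key-Y", "key-Z"], 3) is None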

Note that before we run the authentication protocol, we perform key-exchange
in order to establish a secure channel.  There are (relatively) cheap
key-exchange methods that allow the sender to send a few bits securely; a
public key scheme could also be used.  Having a secure channel ensures that
if our adversary interposes himself in our communication channel he would
not be able to insert erroneous data.  For sending widely-published data, we
don't really need to encrypt the data -- we can simply exchange an
irreducible polynomial which is used to calculate a fingerprint to be
appended to each message.  A fingerprint is a cryptographic checksum which
is unforgeable with high probability when the key (the irreducible
polynomial) is unknown (kept secret).  The algorithm is by Rabin and Karp
and was presented by Karp during his Turing Award acceptance lecture.  By
sending a fingerprint with each packet, we ensure that the recipient will
know if any data are corrupted.  This is much more efficient than
encryption: we have an implementation that can fingerprint in excess of
900 Kbytes/second on an IBM APC/RT.
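
A rough sketch of the fingerprinting idea in Python (nothing like the
tuned implementation just mentioned, and the "secret" polynomial below
is made up rather than verified irreducible):

    def fingerprint(data, poly, degree):
        # Treat `data` as a polynomial over GF(2); return it mod `poly`.
        fp = 0
        for byte in data:
            for bit in range(7, -1, -1):
                fp = (fp << 1) | ((byte >> bit) & 1)
                if fp >> degree:          # overflowed `degree` bits: reduce
                    fp ^= poly
        return fp

    SECRET = (1 << 64) | 0x1B   # shared secret; really a random irreducible
    msg = b"mount /usr please"
    tag = fingerprint(msg, SECRET, 64)   # sender appends tag to the packet
    # Receiver, knowing SECRET, recomputes and compares:
    print(fingerprint(msg, SECRET, 64) == tag)                  # True
    print(fingerprint(b"mount /etc please", SECRET, 64) == tag) # False (whp)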

What does the quorum buy us?  Well, if the probability of an intruder
breaking into any one ``secure'' host is p, and our desired level of
security is s (i.e., we want the probability of an intruder being able
to break our system to be at most s, with s < p), we pick the quorum
size M = ceil(log(s)/log(p)), so that p^M <= s, and run N > M hosts.
Running more than M hosts lets the system keep going with one or more
servers disabled, so denial-of-service by crashing a few servers is not
a problem.  (Of course, this does not address denial-of-service by
somebody cutting your ethernet cable....)

By using zero-knowledge authentication where the authentication puzzles can
be widely published (similarly, public key signature schemes may be used),
we ensure that we contact the hosts that we intended to -- nobody can
pretend to be the authentication server.  By using quorum consensus, we can
lower the probability of an intruder breaking our system arbitrarily.

-bsy
-- 
Internet:	bsy@cs.cmu.edu		Bitnet:	bsy%cs.cmu.edu%smtp@interbit
CSnet:	bsy%cs.cmu.edu@relay.cs.net	Uucp:	...!seismo!cs.cmu.edu!bsy
USPS:	Bennet Yee, CS Dept, CMU, Pittsburgh, PA 15213-3890
Voice:	(412) 268-7571

dkhusema@decwrl.dec.com (Dirk Husemann (Inf4 - hiwi)) (11/02/88)

From article <5182@saturn.ucsc.edu>, by riedl@purdue.edu (John T Riedl):
> 
> Needham and Schroeder's paper "Using Encryption for Authentication in
> Large Networks of Computers" presents algorithms for authenticating
> conversations in a possibly hostile distributed environment.  I
> believe these algorithms form the basis for the Project Athena
> Kerberos authentication server.

	Could you give the source for this paper?  I'd like to look into it
for some research I'm doing.

> ...
>
> John
> -- 
> John Riedl
> {ucbvax,decvax,hplabs}!purdue!riedl  -or-  riedl@cs.purdue.edu

	Thanx,
		Dirk Husemann

------------------ Smile, tomorrow will be worse! --------------
Email:	dkhusema@immd4.informatik.uni-erlangen.de
Or:	{pyramid,unido}!fauern!faui44!dkhusema
Mail:	Dirk Husemann, Aufsess-Str. 19, D-8520 Erlangen,
(Home)	West Germany
(Busi-	University of Erlangen-Nuremberg, Computer Science Dep.,
ness)	IMMD IV, Martensstr. 1, D-8520 Erlangen, West Germany
Phone:	(Home) +49 9131 302036,	(Business) +49 9131 857908
-- Beam me up, Scotty, there's no intelligent life down here! --
--------------- My opinions are mine, mine, mine ---------------