[comp.os.research] A comment on assigning probabilities

fouts@lemming. (Marty Fouts) (10/25/88)

In article <5241@saturn.ucsc.edu> Michael.Browne@k.gp.cs.cmu.edu writes:

   [ Early exposition deleted. . .]

   What does the quorum buy us?  Well, if the probability of an intruder
   breaking into any one ``secure'' host is p, and our desired level of
   security is s (i.e., the probability of an intruder being able to break our
   system is s, s < p), we can run N > M = ceil(log(s)/log(p)) hosts, where
   M is the least integer with p^M <= s, to achieve the desired level of
   security.  We run N > M hosts so the system
   can run with one or more servers disabled so denial-of-service by crashing a
   few servers is not a problem.  (Of course, this does not address
   denial-of-service by somebody cutting your ethernet cable....)

   By using zero-knowledge authentication where the authentication puzzles can
   be widely published (similarly, public key signature schemes may be used),
   we ensure that we contact the hosts that we intended to -- nobody can
   pretend to be the authentication server.  By using quorum consensus, we can
   lower the probability of an intruder breaking our system arbitrarily.
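[Editorial note: the quorum sizing in the quoted paragraph can be sketched
in Python.  This is an illustration only; the probabilities below are
invented for the example.]

```python
import math

def quorum_size(p, s):
    """Smallest M with p**M <= s: the number of independent hosts an
    intruder must defeat to get overall break-in probability down to s."""
    assert 0 < s < p < 1
    return math.ceil(math.log(s) / math.log(p))

# Hypothetical numbers: each host is broken with probability 0.5,
# and we want overall break-in probability at most 0.01.
M = quorum_size(0.5, 0.01)   # 7, since 0.5**7 = 0.0078125 <= 0.01
# Run N > M hosts so crashing a few servers does not deny service.
```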

This appears to be an example of a subtle reasoning flaw I've seen
several times in breakin probability analysis.  There are two
problems.  First, there is the apparent tacit assumption that
arbitrary reduction of probability equates to 0 probability.

More importantly, this is an example of a difficulty in using probability
theory to assign degree of security.  The author tacitly assumes that
the probability of breaking into any secure host is independent of the
probability of breaking into any other secure host.  If the breakin is
due to a commonly shared feature of the hosts, such as an
implementation weakness, then the probability of defeating a quorum
system may be independent of the number of hosts used as "secure"
servers.  Worst-case analysis requires that this probability be
assigned to the security of the system.

In this event, increasing the number of authenticators has no effect
on the degree of security of the system.
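[Editorial note: Marty's point can be made concrete with a toy model.  This
is a sketch; the shared-flaw probability q is an invented parameter, not
something from either posting.  If a flaw common to every host exists with
probability q and defeats them all at once, the quorum's break-in
probability floors at q no matter how many hosts are added.]

```python
def quorum_break_prob(p, m, q):
    """Break-in probability for an m-host quorum when a shared flaw
    (probability q) compromises all hosts at once, and otherwise each
    host falls independently with probability p."""
    return q + (1 - q) * p ** m

# With no shared flaw, risk shrinks geometrically in m ...
independent = quorum_break_prob(0.1, 10, q=0.0)
# ... but with even a 1% shared flaw, it never drops below about 1%.
correlated = quorum_break_prob(0.1, 10, q=0.01)
```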

--
+-+-+-+     I don't know who I am, why should you?     +-+-+-+
   |        fouts@lemming.nas.nasa.gov                    |
   |        ...!ames!orville!fouts                        |
   |        Never attribute to malice what can be         |
+-+-+-+     explained by incompetence.                 +-+-+-+

bsy@f.gp.cs.cmu.edu (Bennet Yee) (10/28/88)

In article <5257@saturn.ucsc.edu> fouts@lemming. (Marty Fouts) writes:
>
>In article <5241@saturn.ucsc.edu> Michael.Browne@k.gp.cs.cmu.edu writes:
>
>   [ Early exposition deleted. . .]
>
>This appears to be an example of a subtle reasoning flaw I've seen
>several times in breakin probability analysis.  There are two
>problems.  First, there is the apparent tacit assumption that
>arbitrary reduction of probability equates to 0 probability.
>
>More importantly, this is an example of a difficulty in using probability
>theory to assign degree of security.  The author tacitly assumes that
>the probability of breaking into any secure host is independent of the
>probability of breaking into any other secure host.  If the breakin is
>due to a commonly shared feature of the hosts, such as an
>implementation weakness, then the probability of defeating a quorum
>system may be independent of the number of hosts used as "secure"
>servers.  Worst-case analysis requires that this probability be
>assigned to the security of the system.

Mike Browne was not the author of the quoted article; I was.  He sent it for
me when the netnews software barfed (he maintains netnews here).  Apparently
the "From" like showed him as the sender, but the article retained my
.signature.

Marty is correct when he says that epsilon is not equal to zero.  It's just
that it can be arbitrarily close.  The independence question is similar: it
depends on whether the hosts run the same OSes, and on whether the OSes or
physical security are the weak link.  Since the sole role of the
``secure hosts'' is to distribute publicly accessible data, I claim that the
OSes can be very simple (or even non-existent!).  Furthermore, I claim that
the authentication-puzzle-distribution program, like the simple OS, is
small enough to be likely bug-free (we can postulate as many
walk-throughs as you'd like [you can see I'm not a great believer in
practical formal verification...]).  Now that the problem clearly is one of
physical security, I think independence is a justifiable assumption.  If you
want to assume that the security department or equivalent has been
compromised (say, in hired spies for security guards), then it's no longer a
computer science problem.  Perhaps I should have been clearer about the
assumptions involving the puzzle-server hosts.

Later in another article <5256@saturn.ucsc.edu>, Marty Fouts writes:

> Look; where we are chosing to disagree is in how important the
> underlying assumptions are.
> ...
> For example, take the posting to which I am currently replying.  The
> author points out that RSA is well known to be invertable under
> certain circumstances and goes on to describe zero-knowledge
> authentication, including the assumption:
> 
> >> zero-knowledge authentication.  The puzzles can be widely published; only
> >> the solutions are kept secret (local to the appropriate server).  By using
> 
> There is the assumption.  Assuming that the solutions are kept secret
> zero-knowledge authentication. . .  All such systems rely on similar
> simplifying assumptions.  As long as you can afford to make the
> assumptions you can build very powerful systems.

Zero-knowledge authentication puzzles/solution pairs can be easily generated
randomly (using a cryptographically secure random number generator, of
course).  As I mentioned above, since the puzzle servers are simple and the
only worry is physical security, keeping the authentication puzzle solution
for a puzzle server secret is not unrealistic.  Furthermore, if a few puzzle
servers _were_ compromised (but fewer than the quorum number, so system
security is not breached), we can just use the normal quorum consensus
method to distribute the new puzzle for the (replaced) puzzle server.  We
_can_ afford to make those assumptions.
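[Editorial note: one possible instantiation of random puzzle/solution
generation is the Fiat-Shamir style, where the published puzzle is
v = s^2 mod n and the secret solution is s.  This is a sketch, not
necessarily the poster's actual scheme; in practice n would be a product
of two large secret primes.]

```python
import secrets

def make_puzzle(n):
    """Generate a puzzle/solution pair in the Fiat-Shamir style:
    publish v = s*s mod n, keep the solution s secret.
    (Sketch only; a real n is a large RSA-style modulus.)"""
    s = secrets.randbelow(n - 2) + 2   # random secret solution, 2 <= s < n
    v = pow(s, 2, n)                   # widely publishable puzzle
    return v, s

# Toy modulus for illustration only -- far too small to be secure.
n = 101 * 103
v, s = make_puzzle(n)
```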

By the way, the only thing proven was that RSA is invertible if you have a
low-order-bit oracle for the plaintext given the ciphertext.  As far as I
know, no such oracle exists.  This means you can look at the result two
ways:  (1) RSA is so secure that not even a single bit of data is leaked;
(2) RSA is so insecure that if a single bit is leaked (and we don't know if
one is or not), the whole thing is blown.

> Further on, the
> author talks about an algorithm by Rabin and Karp which generates a
> fingerprint:
> 
> >> A fingerprint is a cryptographic checksum which
> >> is unforgeable with high probability when the key (the irreducible
> >> polynomial) is unknown (kept secret)
> 
> Bingo.  Another assumption.  In all of these systems, there is an
> underlying assumption that something is kept secret, that the
> technique to do something is unknown or unknowable, or that physical
> security can not be breached.  Time and again these assumptions are
> found false in real systems.

I mentioned fingerprinting in order to use it as a method of creating a
non-corruptible channel of communication -- that is, a channel where data
corruption can occur, but that data corruption is detected with high
probability (with our implementation, the probability is approximately
1-(2^-31)).  The key is randomly generated at connection-setup by the
client.  Unless the key-exchange algorithm leaks the key, the client decides
to publish it, or the machine on which the client runs is insecure, this
session key is secret.
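[Editorial note: the fingerprint described above is a Rabin fingerprint --
the message is treated as a polynomial over GF(2) and reduced modulo a
secret random irreducible polynomial.  A minimal sketch; the degree-3 key
below is a toy, whereas the roughly 1-(2^-31) detection figure corresponds
to a degree-31 key.]

```python
def rabin_fingerprint(msg_int, poly):
    """Reduce the message polynomial (the bits of msg_int) modulo the
    key polynomial over GF(2).  poly must include its leading term:
    0b1011 is x^3 + x + 1.  Sketch only; real implementations process
    the message incrementally with a degree-31 irreducible key."""
    deg = poly.bit_length() - 1
    while msg_int.bit_length() > deg:
        # XOR the key, shifted to align with the message's top bit,
        # which clears that bit (polynomial long division over GF(2)).
        shift = msg_int.bit_length() - poly.bit_length()
        msg_int ^= poly << shift
    return msg_int

# Toy example: (x^4 + x^3 + x) mod (x^3 + x + 1) = x^2 + x + 1
fp = rabin_fingerprint(0b11010, 0b1011)
```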

Note again that I make only the standard cryptographic assumption that the
algorithm is known by the opponent; only the key is kept secret.  It is true
that I assume that something is kept secret.  It is NOT true that I assume
that the technique is unknown or unknowable.  It is also NOT true that I
assume physical security can not be breached -- the assumption is only that
physical breaches at different hosts occur as independent events.

> 
> Finally, the author goes on to describe the use of quorum consensus,
> claiming "we can lower the probability of an intruder breaking our
> system arbitrarily."  This agrees with my original claim.  I said
> systems couldn't be made absolutely secure, only reasonably secure.
> The author is quantifying the degree of security.

Do you claim that any knowledge-based scheme is insecure?  Any system that
asks for a password, no matter how many random bits long, is ``just''
``reasonably secret'' by your definition.  What is ``absolute security''?

-bsy
-- 
Arpa:	bsy@a.cs.cmu.edu		Bitnet:	bsy%a.cs.cmu.edu@vma.cc.cmu.edu
CSnet:	bsy%a.cs.cmu.edu@relay.cs.net	Uucp:	...!seismo!a.cs.cmu.edu!bsy
USPS:	Bennet Yee, CS Dept, CMU, Pittsburgh, PA 15213-3890
Voice:	(412) 268-7571