[comp.unix.questions] wiretapping techniques

roberts@cmr.icst.nbs.gov (John Roberts) (07/25/88)

With few exceptions, I think the following can be considered true:
  1) A network can not be considered secure if the physical links are 
     not secure.
  2) Unless you have taken extraordinary measures, your equipment is
     probably susceptible to electronic eavesdropping. These measures are
     generally expensive, and unlikely to be implemented except at
     sensitive sites. Some of these measures are described in the appropriate
     government documents (which may be classified).

I think that open discussion of weak points and breakin techniques is likely
to cause much more harm than good, since not everyone will be willing and
able to take measures, and presumably a number of people who are willing but
unable to break into other systems will take advantage of the information. I
do not object to general cautions, but discussion of specific techniques to
break security seems to be way out of line. (Or perhaps I'm wrong, and we
should start posting circuit diagrams of spy equipment :-)

As an example of a more reasonable approach, if you should happen to 
discover a way to break into any Unix system, DO NOT post it to the net
as a public service. You might quietly send a note to the designers, and
they might come up with a patch and quietly distribute it, perhaps without
even saying what it's for, and everyone can laugh about the situation
afterward. For situations where the problem is unlikely to be fixed
(wiretapping, etc.), EXTREME caution should be used in informing the users 
that there is a security problem.

<Standard disclaimers.>                        John Roberts
                                               roberts@cmr.icst.nbs.gov

jwm@stdc.jhuapl.edu (Jim Meritt) (07/27/88)

In article <16625@brl-adm.ARPA> roberts@cmr.icst.nbs.gov (John Roberts) writes:
}As an example of a more reasonable approach, if you should happen to 
}discover a way to break into any Unix system, DO NOT post it to the net
}as a public service. You might quietly send a note to the designers, and
}they might come up with a patch and quietly distribute it, perhaps without
}even saying what it's for, and everyone can laugh about the situation
}afterward. For situations where the problem is unlikely to be fixed
}(wiretapping, etc.), EXTREME caution should be used in informing the users 
}that there is a security problem.


Have you looked at comp.risks & sci.crypt?




Disclaimer: Individuals have opinions, organizations have policy.
            Therefore, these opinions are mine and not any organization's!
Q.E.D.
jwm@aplvax.jhuapl.edu 128.244.65.5  (James W. Meritt)

PAAAAAR%CALSTATE.BITNET@cunyvm.cuny.edu (07/27/88)

Since there are so many ways of observing a logon sequence and then
duplicating it, a high security system needs to implement a changing
logon sequence so that what lets a valid user into the system today
does not allow entrance to a black-hatted person the following day.

People have published two distinct variations on this theme.
First - for machine-to-machine security (including a smart card as a machine)
Second - for human-to-machine login sequences.

A relative of this is the "pass algorithm" (I don't at this time recall
who suggested it).  The system that logs in is given some information
and must respond to it in the correct way.

The second technique is based on storing a number of questions (say 10)
and 10 encrypted answers. At login the machine asks a collection
of randomly chosen questions and reads replies that are checked against
the encrypted dossier of information for the person who is
putatively logging in...
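
A rough sketch of how that check might be coded, in case it makes the
idea concrete.  The questions, the placeholder hash strings, and the use
of crypt(3) as the "encryption" are my own illustrative assumptions, not
anyone's published implementation; since the hashes below are dummies, a
real run will deny access until you substitute genuine crypt() output.

    /* Sketch only: keep crypt(3)-hashed answers, ask two randomly chosen
     * questions, and hash each reply for comparison.  Build with something
     * like "cc -o quiz quiz.c -lcrypt"; some systems also want <crypt.h>.
     */
    #define _XOPEN_SOURCE 500
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define NQ 4        /* the post suggests 10; 4 keeps the sketch short */

    static const char *question[NQ] = {
        "Mother's birthplace? ",
        "First pet's name? ",
        "Favourite composer? ",
        "High school mascot? "
    };

    /* Hashed answers as crypt(answer, salt) would produce them.  These are
     * made-up placeholders, so every reply is rejected until you generate
     * real ones.
     */
    static const char *hashed[NQ] = {
        "abXXXXXXXXXXX", "cdXXXXXXXXXXX", "efXXXXXXXXXXX", "ghXXXXXXXXXXX"
    };

    int main(void)
    {
        char reply[128], *hp;
        int asked, k, right = 0;

        srand((unsigned)time(NULL));
        for (asked = 0; asked < 2; asked++) {
            k = rand() % NQ;             /* may repeat; fine for a sketch */
            fputs(question[k], stdout);
            fflush(stdout);
            if (fgets(reply, sizeof reply, stdin) == NULL)
                return 1;
            reply[strcspn(reply, "\n")] = '\0';
            hp = crypt(reply, hashed[k]);
            if (hp != NULL && strcmp(hp, hashed[k]) == 0)
                right++;
        }
        puts(right == 2 ? "Access granted." : "Access denied.");
        return right == 2 ? 0 : 1;
    }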


Someone else has proposed an intriguing variant, the "pass algorithm":
the user (person or system logging in) has memorized an algorithm
which is applied to input provided by the system to which they are
attempting to gain access.  The input is generated randomly.
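
Here is a small illustrative sketch of such a pass-algorithm check.  The
particular transformation ("reverse the digits and add 7") is an arbitrary
stand-in of my own; the point is only that the system issues a random
challenge and expects the reply the memorized algorithm would produce.

    /* Sketch of a pass-algorithm login check.  The "memorized"
     * transformation used here is an arbitrary example; any secret
     * function of the challenge would do.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* The algorithm the legitimate user has memorized. */
    static long transform(long challenge)
    {
        long reversed = 0;

        while (challenge > 0) {
            reversed = reversed * 10 + challenge % 10;
            challenge /= 10;
        }
        return reversed + 7;
    }

    int main(void)
    {
        long challenge, reply;

        srand((unsigned)time(NULL));
        challenge = 1000 + rand() % 9000;    /* fresh 4-digit challenge */
        printf("Challenge: %ld\nResponse:  ", challenge);
        if (scanf("%ld", &reply) != 1 || reply != transform(challenge)) {
            puts("Access denied.");
            return 1;
        }
        puts("Access granted.");
        return 0;
    }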

Has anyone implemented any of these variations on UNIX?
Dick Botting
PAAAAAR@CCS.CSUSCC.CALSTATE(doc-dick)
paaaaar@calstate.bitnet
PAAAAAR%CALSTATE.BITNET@{depends on the phase of the moon}.EDU
Dept Comp Sci., CSUSB, 5500 State Univ Pkway, San Bernardino CA 92407
Disclaimer: What with my brain, my fingers, this Mac, Red Ryder,
            the PDP and its software, NOS and the CSU CYBERS,
            plus transmission errors, your machine, terminal,
            eyes, and brain,.....
       I probably didn't think what you thought you just read any way!

PAAAAAR%CALSTATE.BITNET@cunyvm.cuny.edu (07/27/88)

Correction to previous message

insert
In the first case, the logout procedure includes the establishment of
part of the password for the next login. The machine that logs in
stores a suitably disguised version ready for the next conversation.

delete
A relative....correct way
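
A minimal sketch of how that first case might look, with crypt(3)
standing in for the "suitable disguise"; the function names and the use
of crypt are my own assumptions and not part of the published schemes.

    /* Sketch: at logout the server fixes the secret part of the next
     * password and keeps only a crypt(3)-hashed ("disguised") copy; the
     * calling machine gets the cleartext once and presents it at the next
     * login.  Build with -lcrypt where needed; some systems want <crypt.h>.
     */
    #define _XOPEN_SOURCE 500
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static char stored_hash[64];          /* the server's disguised copy */

    /* Run at logout: pick the secret for the next session, remember only
     * its hash, and hand the cleartext back to the caller exactly once. */
    static void establish_next(char *next, size_t len)
    {
        const char *h;

        snprintf(next, len, "%08lx", (unsigned long)rand());
        h = crypt(next, "ab");               /* "ab" is an arbitrary salt */
        strncpy(stored_hash, h ? h : "*", sizeof stored_hash - 1);
    }

    /* Run at the next login: hash whatever is presented and compare. */
    static int check_login(const char *presented)
    {
        const char *h = crypt(presented, stored_hash);

        return h != NULL && strcmp(h, stored_hash) == 0;
    }

    int main(void)
    {
        char next[32];

        srand((unsigned)time(NULL));
        establish_next(next, sizeof next);          /* end of session 1 */
        printf("secret for next session: %s\n", next);
        printf("wrong secret:   %s\n",
               check_login("guess") ? "accepted" : "rejected");
        printf("correct secret: %s\n",
               check_login(next) ? "accepted" : "rejected");
        return 0;
    }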

(If you were asked
"Enter S to send your message, E to edit, or C to cancel
"
and you input 'E', would *you* expect it to send the d*** message...
Thank you, CYBER MAIL.
)
Dick Botting

daveb@geac.UUCP (David Collier-Brown) (07/27/88)

From article <16625@brl-adm.ARPA>, by roberts@cmr.icst.nbs.gov (John Roberts):
> I think that open discussion of weak points and breakin techniques is likely
> to cause much more harm than good, 

   Only in the short run!

   Regrettably, people are human.  If you want a given level of
security (of data) and don't have it, you typically have to
**demonstrate** that you don't have it.  However, to demonstrate
this you have to threaten security... yourself.

   This can get you in trouble.  In fact, the test to prove that you
**do** have a given level of security can get you in trouble! 

  One of the basic tenets of "orange book" security is that the
means used to ensure security are to be publicly known.  This does
not extend to detailed schematics of hardware to open a covert path,
but it does strongly suggest that known weaknesses should be
reported.  
  Have a look in the security discussion group, the literature of
computer security, etc., for further support of "security by design,
not by obfuscation"...


 --dave (B1 on a workstation) c-b

ps to John: sorry if this sounds like a flame; it's not, it's just 
a common-mode error that I get **real** annoyed at hearing
made again and again... (:-{)
-- 
 David Collier-Brown.  {mnetor yunexus utgpu}!geac!daveb
 Geac Computers Ltd.,  |  Computer science loses its
 350 Steelcase Road,   |  memory, if not its mind,
 Markham, Ontario.     |  every six months.

ron@topaz.rutgers.edu (Ron Natalie) (07/27/88)

All the good security bugs out there today involve poorly designed
network code.  I worked on an Army "Tiger Team" project, and I almost
never broke in through the main login system.  It nearly always
involved either some network back door, or compromising some
non-privileged user account and then exploiting system bugs to become
privileged (for example, the proliferation of "field service" accounts
that have trivial passwords).

Besides, as far as most people are concerned, tapping the data on the
wire is as serious a concern as someone actually being able to log in
without authorization.

-Ron

daveb@geac.UUCP (David Collier-Brown) (07/29/88)

From article <16641@brl-adm.ARPA>, by PAAAAAR%CALSTATE.BITNET@cunyvm.cuny.edu:
> In the first case  the logout procedure includes the establishment of
> part of the password for the next login. The machine that logs in
> stores a suitably disguised version ready for the next conversation.
...

  Another, more robust mechanism is the "trusted path". There is a
physical signal (usually a break) that the security kernel always
honors, placing the user in touch with known, trusted software to
start a logon sequence.
  A dedicated wire is even better, if you're on a workstation: you
press the magic button and the computer says "enter password to
log in".

--dave (B2 on a desktop) c-b

-- 
 David Collier-Brown.  {mnetor yunexus utgpu}!geac!daveb
 Geac Computers Ltd.,  |  Computer science loses its
 350 Steelcase Road,   |  memory, if not its mind,
 Markham, Ontario.     |  every six months.