FC01@USC-ECL (12/31/82)
From: FC01 <FC01@USC-ECL>
Date: 30 Dec 1982 0756-PST

I just wanted to point out the types of security that exist, what they are good for, and why it is that systems are very hard to make secure.

1. Physical separation - the strongest security. This makes it real hard to tamper, since no access at all is allowed; it also makes it real hard to get anything done, for the same reason.

2. Encryption of information. This can make it arbitrarily improbable to figure out the meaning of data. It also takes additional CPU time to use encryption, and can be a pain if you forget the codes.

3. Logical separation. This is the use of the OS to try to separate things for you. The main problems here are that the OS is written by people (fallible, bribable, etc.) and is therefore imperfect. In addition, there is a real need for operators to be able to access any file in the system for maintenance purposes.

4. Trust. If you trust the people you share resources with, security is knowing that they wouldn't do anything bad to you anyway.

Since 1 has the major disadvantage of not being able to get anything done, and 3 and 4 are so fallible, 2 seems the only way to protect oneself. If you have the input stream decoded by program, you can fool even the slickest kmem hackers. If you have the output encoded, you can even fool yourself. If kmem itself were coded so that only certain areas had useful information after decoding (through the public device driver), it would be hard to watch others. Other schemes are also viable, but the main point is that if you want protection, use codes and do it yourself; don't trust others.
-------
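As a concrete (and deliberately toy) illustration of the "encode it yourself" idea above, here is a minimal sketch of a filter you could pipe your data through before it ever reaches a shared file. The XOR cipher and the key handling are placeholders, not a real scheme.

/* toycrypt.c -- toy "do it yourself" stream encoder (NOT real security).
 * Running the same filter again with the same key decodes the data. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char *key = (argc > 1) ? argv[1] : "changeme";   /* placeholder key */
	int klen = strlen(key);
	int c, i = 0;

	while ((c = getchar()) != EOF) {
		putchar(c ^ key[i % klen]);              /* XOR each byte with the key */
		i++;
	}
	return 0;
}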
ron%brl-bmd@sri-unix.UUCP (07/01/83)
From: Ron Natalie <ron@brl-bmd> Of course, the most interesting security botch was our Cyber system. It used to have a command that told you what users had the same password that you did. -Ron
kent%Shasta%sumex-aim@decwrl.UUCP (07/01/83)
From: Chris Kent <decwrl!kent%Shasta@sumex-aim> Sorry, Greep, that hack went away with version 7. Passwords now have a two character "salt", which is part of the date and time tacked onto the plaintext; then the whole thing is encrypted together, and the plaintext of the salt is appended to the cyphertext. So even if you have the same password on many machines, the cyphertext won't show it. Cheers, chris
edhall%rand-unix@sri-unix.UUCP (07/01/83)
UNIX `salts' its passwords with a 12-bit random number so that identical plaintext has only a one-out-of-4096 chance of producing the same cyphertext. The first two characters of the encrypted password represent this `salt'. The salt is used to permute a lookup table in the DES encryption algorithm. Modifying the DES algorithm used for password encryption in this way also keeps someone from making a fast password-search device using a DES chip (unless the salt just happened to be that one-out-of-4096th combination that corresponds to the actual DES standard; perhaps this particular salt should be inhibited). -Ed
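To make the mechanics concrete: the stored field is the two salt characters followed by eleven characters of ciphertext, and crypt(3) reproduces the whole thing when handed the plaintext and that salt. A minimal sketch (the stored field and the guess below are made up; some systems also want <crypt.h> and linking with -lcrypt):

/* checkpw.c -- sketch: verify a guess against one salted password field */
#include <stdio.h>
#include <string.h>

extern char *crypt(const char *, const char *);  /* from the C library */

int main(void)
{
	char *stored = "abJnggxhB/yWI";   /* hypothetical 13-character field */
	char *guess  = "sesame";          /* hypothetical plaintext */
	char salt[3];

	strncpy(salt, stored, 2);         /* first two characters are the salt */
	salt[2] = '\0';

	/* crypt() returns the salt followed by the ciphertext */
	printf("%s\n", strcmp(crypt(guess, salt), stored) == 0
	               ? "password matches" : "no match");
	return 0;
}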
ron%brl-bmd@sri-unix.UUCP (07/01/83)
From: Ron Natalie <ron@brl-bmd> Unix had that. An additional randomizing factor was the seed in the V7 and later password crypts. We have for a long time kept our passwords in a non-readable file, leaving the password field in /etc/passwd blank. -Ron
BRUCE%umdb@sri-unix.UUCP (07/02/83)
From: Bruce Crabill <BRUCE@umdb>
I have never understood the reason behind the "salt" in the password
encryption. I understand that it was to help prevent duplicate ciphertext
when two users had the same password, but why not just take the userid and
encrypt it with the user's password and place the resultant ciphertext in
the password file? I also agree with Ron Natalie about the concept of keeping
the passwords in a non-readable file. Seems like the best way to avoid
problems.
Bruce
ARPANET: BRUCE%UMDB.BITNET@BERKELEY
BITNET: BRUCE@UMDB
SJOBRG.ANDY%MIT-OZ%mit-mc@sri-unix.UUCP (07/02/83)
If you suspect that you have the same password as someone, you can just encrypt your password using their salt and get identical encrypted passwords if the plaintext passwords are the same. -andy
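In code, the check is one getpwent() loop. A rough sketch, assuming the classic world-readable /etc/passwd and crypt(3) (link with -lcrypt on some systems):

/* samepw.c -- re-encrypt a known plaintext with each user's salt and
 * report whose stored field matches (only works while the encrypted
 * passwords are still readable in /etc/passwd). */
#include <stdio.h>
#include <string.h>
#include <pwd.h>

extern char *crypt(const char *, const char *);

int main(int argc, char **argv)
{
	struct passwd *pw;
	char *plain = (argc > 1) ? argv[1] : "sesame";   /* my own password */
	char salt[3];

	while ((pw = getpwent()) != NULL) {
		if (strlen(pw->pw_passwd) < 13)
			continue;                        /* empty or locked entry */
		strncpy(salt, pw->pw_passwd, 2);         /* their salt */
		salt[2] = '\0';
		if (strcmp(crypt(plain, salt), pw->pw_passwd) == 0)
			printf("%s has this password too\n", pw->pw_name);
	}
	endpwent();
	return 0;
}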
MCLINDEN@RUTGERS.ARPA (07/07/83)
From: Sean McLinden <MCLINDEN@RUTGERS.ARPA>
Not meaning to beat a dead horse, I think that it would be worthwhile
to distinguish between those playful users who are "friendly" but
mischievous and those who (if given the opportunity), would do harm
to a system. In the first category might be included co-workers and
system programmers (hackers). In an office setting, it
may be nearly impossible to keep passwords a secret and anyone who has
access to the machine console could easily bring the system down and
back up in a single user mode, security notwithstanding. Having been
both a system administrator and a programmer it seems to me that
securing a system from the playful but trusted user is more a matter
of education and less one of heavily guarding machine and system
secrets (which is all but impossible anyway).
The non-trusted user is a different story. Almost anyone with the
desire can learn the inner workings of UNIX. Unlike IBM (and,
to a certain extent, DEC) operating systems, practically
everyone who has a UNIX license (educationally) has a source license,
and the sources are easy to get hold of. The idea of creating
restricted shells has been mentioned before and is fraught with
possibilities. Consider the following (very trivial) example:
/* newroot.c */
#include <unistd.h>

int main(void)
{
	chroot("/usr/guest");            /* make /usr/guest the new root */
	chdir("/");                      /* don't leave the cwd outside the new root */
	execlp("csh", "csh", "-f", (char *)0);
	return 1;                        /* only reached if the exec fails */
}
This program, run setuid root, will create a shell (csh), whose
idea of "root" is /usr/guest. Done properly,
chroot (which exists in 4.1 but which isn't documented), could
be used to create systems with their own "super user",
their own password files, and assuming that these separate
roots can run init(), a system could be created
completely secure from other systems on the same machine. The
drawback is, of course, that certain files and utilities would
have to be duplicated for each system. On the other hand this
may be one mechanism for isolating potential trouble spots
from the entire system.
Sean McLinden
-------
greep%su-dsn@sri-unix.UUCP (07/07/83)
Unix has that too. You just look at the password file for someone with the same encrypted password as you, and it's likely that the plaintext is also the same. This scheme can also have its advantages; if I want to set up a login for someone remote, who doesn't want to have to send me his initial password in a message or have the account set up with no password, he can send me the encrypted version and I can just install it that way without knowing the plaintext at all.
gwyn%brl-vld@sri-unix.UUCP (07/07/83)
From: Doug Gwyn (VLD/VMB) <gwyn@brl-vld> If the encrypted text is the same, then under the modified-DES scheme I am sure the plaintext is also the same. However, the "salt" helps a lot since usually the same plaintext password for different users would encrypt to different salted text. Now, if two people are SUSPECTED of having the same password (say, a husband and wife), then on that assumption it will be much easier to break the encryption even though different salts were used.
pc@ukc.UUCP (07/08/83)
If people are REALLY WORRIED about the decryption of passwords
why not move the passwords to another file, which is read-only
by root? After all, only passwd and login need to access the file
and both of them are setuid.
At UKC, we have user populations of 600-700 and have totally
replaced the password file by a binary file with some integral
number of bytes per user. This means that random access can be used
to access an individual entry. Other keys to the password file (such
as login name and in our case the user's system id) are abstracted
to a set of very small files which are kept in uid
order - these files can be opened and searched very easily.
For compatibility purposes we generate /etc/passwd every night (with
no passwords) and passwords are never printed even in their encrypted
form.
One of the benefits of a binary password file is that the record for
each user can be much bigger. We currently store a set of limits
which are applied at login time and we plan to put in the set of
groups which can be used for 4.1c/4.2.
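The layout below is only a guess at what such a record might look like (the field names and the file path are invented, not UKC's actual format); the point is simply that fixed-size records make lookup by uid a single lseek():

/* pwrec.c -- sketch of a fixed-size binary password record */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

struct pwrec {                     /* hypothetical layout */
	char name[16];
	char passwd[14];           /* 13-character crypt field plus NUL */
	int  uid, gid;
	long cpu_limit;            /* one of the login-time limits mentioned above */
	char shell[32];
};

/* record for uid N lives at byte offset N * sizeof(struct pwrec) */
int fetch(int fd, int uid, struct pwrec *r)
{
	if (lseek(fd, (off_t)uid * sizeof *r, SEEK_SET) == (off_t)-1)
		return -1;
	return read(fd, r, sizeof *r) == (ssize_t)sizeof *r ? 0 : -1;
}

int main(int argc, char **argv)
{
	struct pwrec r;
	int fd = open("/etc/pwfile", O_RDONLY);   /* hypothetical path */

	if (fd < 0 || argc < 2)
		return 1;
	if (fetch(fd, atoi(argv[1]), &r) == 0)
		printf("%s uid=%d shell=%s\n", r.name, r.uid, r.shell);
	close(fd);
	return 0;
}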
Peter Collinson
{mcvax, vax135}!ukc!pc
tim@unc.UUCP (07/10/83)
It is not true that only login and passwd need to read /etc/passwd. The GCOS field is used for maintenance of a user information database on many systems, requiring that the file be readable by finger as well. Of course, finger could be made setuid to root, or a different file could be used for the database.
______________________________________
The overworked keyboard of Tim Maroney
duke!unc!tim (USENET)
tim.unc@udel-relay (ARPA)
The University of North Carolina at Chapel Hill
guy@rlgvax.UUCP (07/10/83)
1) Anybody out there know *why* the 4.1BSD manuals don't document "chroot"?
The V7 manual does, and the System III and System V manuals do.
2) On a vanilla V7 system "chroot" is *not* secure. You can reference above
your fake root with "..". This bug has been fixed in 4.1BSD and in System III
and later USG releases. In fact, there is an undocumented feature of the
System III "login"; if the user's login shell begins with "*" (or is "*"),
"login" changes the root to the home directory specified in the password file,
prints "Subsystem root: <that_directory>", and attempts to run "/etc/login"
and, if that fails, "/bin/login" from the new root. The System V login does
all this (which implies it wasn't just a hack) and also sticks the string
<!sublogin> in the environment (that's right, a string in the environment with
no "=" in it!). My interpretation of this is that you put an entry for the
*subsystem*, not for the *user*, in the password file (i.e., if you had a
subsystem called "anonymous", you would have:
anonymous:<encrypted subsystem password>:<uid>:<gid>:<name>:/anonymous:*
in the password file.) Then you would put the password file for the anonymous
user subsystem in "/anonymous/etc/passwd", and either a copy of/link to
"/etc/login" or a special login program in "/anonymous/etc/login". Is this
how it is intended to be used? And why is it not documented in the System III
or System V documentation?
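If that reading is right, the flow inside login would look roughly like the sketch below; this is reconstructed from the behavior described above, not from the actual System III source.

/* When the shell field starts with "*": change root to the home
 * directory from the password entry and hand off to the subsystem's
 * own login.  (System V would also push "<!sublogin>" into the
 * environment at this point.) */
#include <stdio.h>
#include <unistd.h>

void enter_subsystem(const char *homedir)
{
	printf("Subsystem root: %s\n", homedir);
	if (chroot(homedir) < 0 || chdir("/") < 0)
		return;                                  /* real login would complain and exit */
	execl("/etc/login", "login", (char *)0);
	execl("/bin/login", "login", (char *)0);         /* fallback if /etc/login fails */
}

int main(void)
{
	enter_subsystem("/anonymous");   /* must run as root for chroot to succeed */
	return 1;
}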
Guy Harris
{seismo,mcnc,we13,brl-bmd,allegra}!rlgvax!guy
bob%ucla-locus@sri-unix.UUCP (07/13/83)
From: Bob English <bob@ucla-locus> I believe the proposal was to remove the password from /etc/passwd and place it in a separate, non-readable file. --bob--
rehmi@umcp-cs.UUCP (07/16/83)
As far as 4.x BSD and chroot() are concerned, there is no danger: namei() checks
whether you are in your root dir and throws away ".."s if you are.
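A user-level caricature of that rule (the function name is invented; the real check lives inside the kernel's namei()):

/* dotdot.c -- illustrate the rule: when a lookup is already sitting at
 * the process's root directory, ".." goes nowhere instead of climbing
 * above it. */
#include <stdio.h>
#include <string.h>

/* returns 1 if this ".." component should be thrown away */
int clamp_dotdot(const char *cur, const char *component, const char *root)
{
	return strcmp(component, "..") == 0 && strcmp(cur, root) == 0;
}

int main(void)
{
	const char *root = "/usr/guest";
	printf("ignore \"..\": %d\n", clamp_dotdot(root, "..", root));   /* prints 1 */
	return 0;
}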
-Rehmi
--
By the fork, spoon, and exec of The Great Basfour.
Arpa: rehmi.umcp-cs@udel-relay
Uucp: ...{allegra,seismo}!umcp-cs!rehmi
fostel@ncsu.UUCP (08/02/83)
Does anyone have any advice on mods to make to patch holes in Eunice
running under VAX/VMS? Theory on where to look for holes or specific
patches all gratefully accepted. In deference to those who would like
to keep these things under our hats, please send mail to me unless
the problem is really neat. If you want to know what turns up, send
me mail and I will forward any good ideas (for plugging, not violating!)
to you privately. For a bit of context, this is part of an attempt to
lever a VMS center organization into at least tasting UNIX, without the
corporate paranoia about losing everything on their machines to evil
Eunice hackers. Thanks.
----GaryFostel----
...!decvax!duke!mcnc!ncsu!fostel
kaiser@jaws.DEC (Pete Kaiser 225-5441 HLO2-1/N10) (12/09/84)
I know of no widely-used OS whose security scheme doesn't ultimately rest in
the hands of at least one trusted administrator. If that administrator isn't
trustworthy, the system can be structurally wonderful and it won't mean a
thing.
Several years ago I worked as a consultant for a quasi-governmental agency
whose computer services were provided by a computer center that was nominally
a consortium administered by a committee of the technical heads of the agencies
that owned it. In fact the system manager of the computer center had the whole
bunch completely intimidated with his technical knowledge, and they left
matters entirely in his hands. This wasn't clear to me yet at the time the
technical head of my agency asked me to write an "appreciation" of the quality
of service the agency was getting. It was poor. The reasons were many and easily
documented, and I did it; after all, the chief told me in these words "not to
pull [my] punches." When he got my report he promptly gave a copy, complete
with my signature, to the computer center. But I didn't know that.
There came a time, though, when I was having just too much trouble getting my
technical work done, because response time was so poor. There were times
when I'd press a key and for minutes nothing would happen. But when I would
talk with other programmers, they felt that response time was no worse than
what they had come to expect. So I began noting down instances and times,
and eventually turned this information into a memo to my employers. They
took the matter up with the computer center. Events at this point ran amok,
and when the dust settled a little, I learned that the computer center's
manager had been monitoring everything I did on the computer. He had done this by
installing a patch in the operating system which monitored every login, and
when it was me, journalled everything to a tape drive he reserved for the
purpose. Those minutes-long pauses in response time had been at times when
contention elsewhere in the system locked out the tape drive -- and therefore
my process as well.
Last I heard, he was still on the job. I left ... and on my own steam.
---Pete
Kaiser%JAWS.DEC@decwrl.arpa, Kaiser%BELKER.DEC@decwrl.arpa
{allegra|decvax|ihnp4|ucbvax}!decwrl!dec-rhea!dec-jaws!kaiser
DEC, 77 Reed Road (HLO2-1/N10), Hudson MA 01749 617/568-5441