[comp.unix.wizards] not using syslogd in the first place

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (08/02/90)

In article <1990Aug1.052525.22007@athena.mit.edu> jik@athena.mit.edu (Jonathan I. Kamens) writes:
  [ best jokes first ]
>   Syslogd doesn't have that problem; syslogd is secure.

An Athena person claiming that one of the least secure logging schemes
in existence is secure?

On this (typical) Sun 4, /dev/log is mode 666, as it has to be to handle
errors from users other than root. But it does *no* authentication!
NONE! ZERO! ZIP! A secure system lets me, e.g., put fake badsu's in the
logs with absolutely no indication of the forgery?

I can flood /dev/log with messages, clogging syslog. That's secure?
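To make the point concrete: on a BSD-style system where /dev/log is a mode-666 Unix-domain datagram socket, any user can hand syslogd a fully forged record. A minimal sketch in modern C (the helper names are mine; the `<37>` prefix encodes facility LOG_AUTH, severity LOG_NOTICE: 4*8 + 5 = 37):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* build a forged auth record; returns its length */
int forge_badsu(char *buf, size_t len, const char *user, const char *tty)
{
    /* <37> = facility LOG_AUTH (4) * 8 + severity LOG_NOTICE (5) */
    return snprintf(buf, len, "<37>su: BADSU %s on %s", user, tty);
}

/* best-effort injection into /dev/log; fails harmlessly if absent */
int inject(const char *rec, size_t n)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;
    strcpy(sa.sun_path, "/dev/log");    /* no credentials required */
    int r = sendto(fd, rec, n, 0, (struct sockaddr *)&sa, sizeof sa);
    close(fd);
    return r;
}
```

Nothing in the record identifies the real sender; syslogd files it under whatever facility and tag the datagram claims. Run inject() in a loop and you have the flooding attack too.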

If I were a cracker who had just achieved root, I would have to replace
or restart *one* program to avoid *all* future detection. That's right,
all security logging goes through *one* hook. There is *no* reliability.
There is *no* backup. That's secure?

Need I continue?

(Oh, that's right. I forgot. Athena only cares about network security.)

  [ so much for the jokes, on to the silliness ]
> In article <18210:Aug103:35:0890@kramden.acf.nyu.edu>, brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>>>4. There are some programs that run interactively that need to be able to
>>> both output errors to stderr to the users and to log messages in the system
>>> logs.  For example, su.  How would su print error messages if it couldn't
>>> use stderr because it was piped through an error logging program.
>> The reason that such programs *need* two error streams is security. su
>> should be logging directly to a file, not to a separate daemon that's
>> easy to clog with spurious messages. See A.
>   Great, so it logs directly to a file, and you have to be logged into that
> machine to read the file.

That's really silly.

Actually, you're absolutely right. su can't *both* write to a file and
write to your (network) error logger; that would defeat the structured
programming principle of, uh, ummm, singlemindedness. And once something
is stuck in a file, it's lost forever. It can't be sent over the
network. Files are sinks, not sources. Remember: Never put something in
a file if you ever want to read it again.

> How
> would that facility be provided if syslogd logged directly to a file?

That's really silly. I said that *secure* programs should log directly
to files. (You continue in this confusion below.)

> |> That's really dumb. ``stdin and stdout are controlled by the user. Hence
> |> programs must not read input or produce output.'' Obviously when there's
> |> a security issue the program should be writing directly to files. In
> |> other cases, the user is supposed to be in control. Also see A.
>   No, it's not dumb at all.  Stdin, stdout and stderr are controlled by the
> user, so programs that depend on security should not depend on them.

That's really silly. Read what I said. ``Obviously when there's a
security issue the program should be writing directly to files.'' Then
read the next sentence, which addresses the real issue: ``In other
cases, the user is supposed to be in control.''

You made essentially a blanket assertion that programs should not use
stderr. Like I said, that's really dumb. Feel free to continue the
discussion with dmr@alice.

>   Incidentally, what if "a malicious hacker type" breaks into your system and
> manages to get root, and wants to do something that'll let him continue to dig
> around without you noticing.
  [ all he has to do is restart every daemon with stderr misdirected ]

That's really silly. In a syslog-based system, *all* he has to do is
subvert syslog. Do you admit that it's easier to break one program than
every daemon on the system?

Anyway, we've discussed various aspects of this scenario a lot through
e-mail... What do you think of this: Daemon foo reopens (reconnects,
whatever) stderr as /dev/log by default. (This is done through the
standard library procedure logstderr().) On the other hand, if you say
foo -2, it'll leave stderr alone. Like it?
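A sketch of what that convention might look like. logstderr(), its sink argument, and the -2 flag are the post's proposal, not an existing library call:

```c
/* Hypothetical logstderr(): a daemon reconnects stderr to a log sink
   by default, and the -2 flag leaves stderr alone, as proposed above. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* returns 1 if stderr was redirected, 0 if -2 suppressed it, -1 on error */
int logstderr(int argc, char **argv, const char *sink)
{
    for (int i = 1; i < argc; i++)
        if (strcmp(argv[i], "-2") == 0)
            return 0;                  /* foo -2: leave stderr alone */
    int fd = open(sink, O_WRONLY | O_APPEND | O_CREAT, 0600);
    if (fd < 0)
        return -1;
    dup2(fd, 2);                       /* stderr now feeds the log sink */
    close(fd);
    return 1;
}
```

A daemon would call logstderr(argc, argv, "/dev/log") first thing in main(); everything it subsequently writes to stderr lands in the log by default, and `foo -2` keeps stderr wherever the invoker pointed it.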

>   Your whole argument appears to be, "Syslogd is silly, errors should always
> be piped to a program that knows how to deal with them."

No. syslog is an insecure, poorly implemented model that will not handle
future needs.

Does the new Berkeley syslog code remember to always connect to /dev/log
on the first openlog(), hence making flags like NDELAY irrelevant? Just
wondering---otherwise the ftpd problem that started this thread will not
be solved.
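For reference, the pitfall in question, sketched with the standard openlog() interface (the wrapper function name is mine):

```c
#include <syslog.h>
#include <unistd.h>

int start_chrooted_logging(const char *newroot)
{
    /* LOG_NDELAY: open the /dev/log connection immediately, while the
       socket is still visible, instead of lazily on the first syslog() */
    openlog("ftpd", LOG_PID | LOG_NDELAY, LOG_DAEMON);
    if (chroot(newroot) == 0)      /* needs root; harmless no-op otherwise */
        (void)chdir("/");
    syslog(LOG_INFO, "still reaches syslogd after the chroot");
    closelog();
    return 0;
}
```

If the openlog() call is omitted, or made without LOG_NDELAY, the first syslog() after the chroot() tries to connect to a /dev/log that no longer exists under the new root, and the message silently vanishes.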

>   So far, the ONLY reason I've seen that could explain why syslog/syslogd is a
> "Bad Thing" is the fact that /dev/log disappears after a chroot(),

syslog is amazingly insecure. It does not provide for adding extra flags
to the error messages that can be interpreted in standard ways. It
deludes the programmer into not worrying about what happens when stderr
blocks. It focuses a major aspect of security (namely, error logging) on
a single, easily subverted point. It does not let the user control
noncentrally where error messages are sent---so that I can't run a
straight telnetd on a separate port with a different login program,
because it stupidly syslog()s all errors through the usual file, without
even an indication that it's the nonstandard version. It is too complex
for simple tasks---it doesn't provide a single, uniform model for all
error messages. (Don't use perror, use syslog! :-) ) It does not allow
more complicated, text-based separation and analysis of messages.

Whoops, I just took more than half a second to think up that last one. I
guess I'll stop here.

> |> No. See B and C. If you want, you can set up named pipes or sockets
> |> /dev/log*, each feeding into a different type of error processor; as
> |> this special case of my ``proposal'' is a generalization of what syslog
> |> does, your efficiency argument is silly.
>   Excuse me, but wasn't "special devices in /dev" one of the reasons you gave
> for proposing this change in the first place?  How have we reduced complexity
> by going from one socket, /dev/log, to several sockets, /dev/log*?

Silly. If there's just one error processor (syslogd) then there's just
one /dev/log. I'm only pointing this out because it proves that sensible
stderr use includes syslog as a special case. Hence stderr is more
flexible.

>   Furthermore, I don't see named pipes anywhere on my BSD system.  Granted,
> they should be there, but they aren't, and BSD4.3 isn't the only Unix without
> named pipes (then again, there are also Unices without sockets, so this is
> sort of a red herring :-).

Yes, it is a red herring.

> |> > 3. Under your scheme, every time I start up a process that I want to log
> |> >    messages, I have to pipe its stderr through this logging process of yours.
> |> Ever heard of shell scripts? And see A.
>   So every daemon is going to have to have a shell-script front-end to it? 
> That means more files on the system that don't really need to be there, and
> slower start-up time for the daemons.

Well, how do you like my foo/foo -2 idea? (Which you would have thought
of yourself, had you looked at general point A like I said.)

---Dan

jik@athena.mit.edu (Jonathan I. Kamens) (08/02/90)

  I'm not going to reply point-by-point to your message, because I have to get
ready to leave town tomorrow, and because frankly, I'm tired of the whole
discussion.

  However, I just want to say that I think it is possible for us to reach an
agreement on this; I won't be around to see whether you agree since (as I
said) I'm leaving town tomorrow, so if you want to talk to me more about it,
we'll have to do it in E-mail.

  You have listed a number of deficiencies in the syslog() mechanism.  I
agree.  Syslog is broken in many ways.  I agree with you that that brokenness
introduces problems which certainly need to be addressed.

  As I look back over this conversation, I see where I said, "This is how you
fix syslog," and where you responded with, "Well, piping stderr to a program
that deals with the errors would be more appropriate."  I said, "What,
you mean an extra process running for every daemon running?" and you said,
"Well, just have them all connect to one named pipe so that there's only one
process running, or have different pipes for different sorts of error handling
facilities."  I said, "What, programs can't write to stderr and log errors?"
and you said, "Well, programs that need to write errors to stderr can do that
and log their other errors directly to a file."  I said, "We'll have to
redirect stderr when starting every program," you said, "Just have every
program go to /dev/log by default."  I said, "If programs log to a file, then
there's no way for me to get those logging messages on another machine," you
said, "Well, why can't it log to the network as well?"  I said, "Well, you
have to configure each program separately for how it does logging," and you
said, "Well, you can use a central configuration file."

  Frankly, it seems to me that as I offered more and more objections, the
responses from you described more and more closely one thing -- syslogd. 
There's /dev/log for programs to get in touch with the logging program. 
There's a central configuration file.  There's one process logging errors for
multiple daemons.  There's network logging and local logging.

  I don't think you're proposing an alternative to syslogd.  I think you're
proposing an *enhanced* syslogd with the problems in syslogd fixed, new
features added to the daemon, and the library interface between syslogd and C
programs enhanced to make those new features available to programmers.

  Perhaps you don't see the difference, but I do.  To say, "Syslogd is
brain-damaged, let's throw it out completely because whoever wrote it must
have been on LSD," is one thing.  To say, "There are problems with syslogd,
let's fix them," is quite another.

  Frankly, if I were going to reimplement things from scratch, I'd use a
Mach-like philosophy -- establish a default error-handling port in a top-level
process (e.g. init) so that all processes subsequently created inherit that
port, and processes that want to change their error-handler can do so, while
processes that don't can use the default one.  Where the port goes, and what
happens at the other end, are transparent to the programmer -- he just knows
what messages he's allowed to send to the port.  If the protocol is defined
properly, it will be extensible so that people can add things without breaking
what's there already.  Of course, there'll have to be an unspoofable way for a
process to say, "I want the default error port back, no matter what's there
now," so that I can't do things like write my own shell that changes the error
handler and then run su from it so that you never see the errors.

  One of the biggest advantages of doing it this way is that programs don't have
to "connect" to the error handler by opening /dev/log or anything like that;
it's already there.  Furthermore, you don't need any shell scripts to redirect
/dev/log.

  Of course, to do things this way, we would need everybody running Mach :-). 
In the meantime, I guess the next best thing is to have everyone agree to
rendezvous at a socket or something and treat that as the error-handling port.
Sort of like syslog, no?
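That fallback can be sketched today with ordinary descriptors: a top-level process creates the port once, and every child inherits it at a well-known fd without ever opening /dev/log. The fd number 3 and the helper name are arbitrary assumptions:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define ERRPORT_FD 3    /* hypothetical well-known descriptor number */

/* parent plays "init + error handler", child plays "daemon": the child
   never opens /dev/log, it just writes to the fd it inherited */
ssize_t errport_demo(char *buf, size_t len)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0)
        return -1;
    if (fork() == 0) {                  /* the daemon */
        dup2(sv[1], ERRPORT_FD);        /* install the inherited port */
        if (sv[0] != ERRPORT_FD) close(sv[0]);
        if (sv[1] != ERRPORT_FD) close(sv[1]);
        const char *msg = "su: BADSU jik on ttyp0";
        if (write(ERRPORT_FD, msg, strlen(msg)) < 0)
            _exit(1);
        _exit(0);
    }
    close(sv[1]);
    ssize_t n = recv(sv[0], buf, len, 0);   /* the handler end */
    close(sv[0]);
    wait(NULL);
    return n;
}
```

Where the other end of the port goes is invisible to the child, which is the transparency property described above; what fds can't give you, and Mach ports could, is the unspoofable "give me the default port back" operation.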

  Now, this sounds a lot like the "error-handling" port is "stderr", but no,
that's not what I mean.  I still maintain that stderr has a very specific
purpose -- the transmission of errors to the USER, or to where the USER wants
them to go.  I think there has to be a difference between where user errors go
and where logging errors go.  If that's our only disagreement, perhaps we can
agree to disagree :-).

Jonathan Kamens			              USnail:
MIT Project Athena				11 Ashford Terrace
jik@Athena.MIT.EDU				Allston, MA  02134
Office: 617-253-8495			      Home: 617-782-0710

Makey@Logicon.COM (Jeff Makey) (08/02/90)

In article <4559:Aug121:33:5590@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>I can flood /dev/log with messages, clogging syslog. That's secure?
>
>If I were a cracker who had just achieved root, I would have to replace
>or restart *one* program to avoid *all* future detection. That's right,
>all security logging goes through *one* hook. There is *no* reliability.
>There is *no* backup. That's secure?

Except when "security through obscurity" actually succeeds, the idea
that a UNIX system can in any way be protected from someone with root
access is completely absurd.  Naturally, any standard method of
exception logging (e.g., stderr, syslog) will be insufficiently
obscure to provide the desired security.

From a security point of view, there are no redeeming features
whatsoever in logging to a file (via stderr in Dan's implementation)
in the face of root access.  On the other hand, if logging is done to
a remote machine then there is a possibility of at least *detecting* a
break-in (assuming, of course, that the loghost is not compromised).
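For example, a single syslog.conf line (the loghost name is an assumption) ships auth records off-machine, where a local root compromise cannot quietly erase them:

```
# /etc/syslog.conf -- forward auth messages to a remote loghost
auth.notice			@loghost
```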

                           :: Jeff Makey

Department of Tautological Pleonasms and Superfluous Redundancies Department
    Disclaimer: All opinions are strictly those of the author.
    Internet: Makey@Logicon.COM    UUCP: {nosc,ucsd}!logicon.com!Makey