[comp.soft-sys.andrew] Convert to AMS?

paw@northstar89.dartmouth.edu (Pat Wilson) (07/03/90)

I'm contemplating converting our mail system to AMS in the hope
that it would stop our constant mail problems (mainly with flock).
We've got about 100 workstations on 3 AFS servers.  I'd like to
hear pros and cons of AMS - I've heard it's *big*, and a pain to
administer, but I'm willing to put up with some amount of hassle
if it'll solve our mail problems in a _robust_ way.

If you've faced a similar decision, I'd be interested in knowing
what you did about it and why.

Thanks.

Pat Wilson
Operations Manager, Project NORTHSTAR
paw@northstar.dartmouth.edu

Craig_Everhart@TRANSARC.COM (07/03/90)

It's total craziness to run ordinary mailers on top of AFS.  That's why
there's an AMS at all.

AMS is not that big.  It is a mail system, however, so it will require
some administration; because it's unfamiliar, those administrative tasks
may seem to be ``a pain.''

The only two AMS installations using AFS are the ones at andrew.cmu.edu
and at transarc.com.  The andrew.cmu.edu installation supports about
10^5 users, and anything at that magnitude will require occasional
intervention.  The transarc.com installation supports far fewer, and the
local sysadmin with the responsibility of running it tells us that ``it
runs itself.''

Let's review the AMS pieces, if for no other reason than as a basis
for discussion.  I'll do this in a layered fashion.

----------------
Bare AMS, no ATK, no AFS, no WP, no SNAP, no AMDS.  You get CUI and VUI
and maybe BatMail; these are multiple interfaces (line-oriented,
screen-oriented, Emacs-subprocess-oriented) into the same database of
messages.  You get Flames processing of incoming mail into idiosyncratic
personal folders.  You get the ability to define a suite of public
bboards--actually, folders that are as public or as private as the file
system (and your set of groups) can make them.  Essentially, you get the
ability to make a public name space of bboards.  But, to be fair,
there's probably no reason to prefer this setup over MH, or Mush, or
maybe even /usr/ucb/Mail.

Add WP (White Pages): you can get fuzzy matching of your local names.

Add SNAP: you can support tiny PCs and Macintoshes, so that they run
full clients of the entire message system.

Add ATK: you get multi-media mail.  This is perhaps the biggest step of
all.  While CUI and VUI are not tiny programs, Messages (the ATK
interface), as a full ATK client, is large.  Older versions of Messages
(pre-X.V11R4 patchlevel 4, or so) grow even larger.

(Add AFS: you get big-site distribution of your publicly-named bboard
tree.  You get location transparency.)

Add AMDS (``AMS delivery'', Andrew Message Delivery System): requires WP
and exploits AFS.  You get reliable local message delivery in spite of
distributed-system transient outages.  You get WP (and fuzzy name
matching) integrated with your local mail delivery mechanism, so
external users get the same fuzzy name matching (with ambiguous or
overly-fuzzy matches bounced; a rough sketch of that match-or-bounce
idea follows this list).

Add netnews support: you keep a copy of netnews in AFS (centrally, so
everybody can fetch it).  It looks like a public suite of AMS folders.
----------------
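
To make the WP/AMDS match-or-bounce behavior concrete, here's a rough
sketch of the idea -- not the real WP code, and the records and names in
it are made up: every token of the personal name in an address has to be
a case-insensitive prefix of some word in exactly one White Pages entry,
or the message bounces.  The real WP matching is fuzzier than a simple
prefix test; this is just the flavor of the delivery-time decision.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical White Pages records, for illustration only. */
struct wp_entry {
    const char *fullname;
    const char *userid;
};

static const struct wp_entry wp[] = {
    { "Patrick A. Wilson", "paw" },
    { "Paula Wilson",      "pw2" },
    { "Craig Everhart",    "cfe" },
};
#define NWP ((int)(sizeof wp / sizeof wp[0]))

/* Is "tok" a case-insensitive prefix of some word in "name"? */
static int tok_matches(const char *tok, const char *name)
{
    const char *p = name;

    while (*p) {
        const char *t = tok, *q = p;
        while (*t && *q &&
               tolower((unsigned char)*t) == tolower((unsigned char)*q))
            t++, q++;
        if (*t == '\0')
            return 1;                        /* prefix of this word */
        while (*p && !isspace((unsigned char)*p))
            p++;                             /* skip rest of word */
        while (*p && isspace((unsigned char)*p))
            p++;                             /* skip to next word */
    }
    return 0;
}

/* Return the unique matching userid, or NULL to bounce (no match, or an
 * ambiguous one). */
static const char *resolve(const char *query)
{
    const char *hit = NULL;
    int i, nhits = 0;

    for (i = 0; i < NWP; i++) {
        char buf[256];
        char *tok;
        int ok = 1;

        strncpy(buf, query, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        for (tok = strtok(buf, " ."); tok != NULL; tok = strtok(NULL, " ."))
            if (!tok_matches(tok, wp[i].fullname)) {
                ok = 0;
                break;
            }
        if (ok) {
            nhits++;
            hit = wp[i].userid;
        }
    }
    return nhits == 1 ? hit : NULL;
}

int main(void)
{
    const char *queries[] = { "p wilson", "pat wilson", "everhart" };
    int i;

    for (i = 0; i < 3; i++) {
        const char *r = resolve(queries[i]);
        printf("%-12s -> %s\n", queries[i],
               r ? r : "bounced (no match or ambiguous)");
    }
    return 0;
}

The point is just that resolution happens at delivery time, and an
ambiguous address turns into a bounce back to the sender rather than a
guess.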

Now, lots of folks out there run with basic-AMS plus ATK (i.e.
Messages), so they get multi-media mail, but still use something like
sendmail for all delivery.

The Transarc installation is sort of the converse: ATK isn't widely
used, so we use AMS (CUI+VUI), AMDS, WP.  For many people, VUI is not a
graceful interface; there are other interfaces under development at CMU,
though.  You can use Messages with the Transarc installation, but ATK
isn't built locally for all platforms.

AFS, like any distributed system, introduces transient failures.  A mail
system needs to behave autonomously, though, and it can't be bothered to
parse error messages appearing on /dev/console.  AMDS, and the part of
AMS that actually deals with the representation of folders, are rather
highly tuned to deal with these transient outages in a graceful way,
since they all developed together.
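
The shape of that tuning, in caricature: treat a failure as transient,
retry with increasing delays, and if the outage persists, requeue the
message rather than bouncing it.  A sketch follows; deliver_once() here
is a stand-in that fakes an outage, not any AMS or AMDS interface.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for whatever actually appends a message to ~userid/Mailbox.
 * This one just pretends the file server is unreachable for the first
 * few calls; it is NOT an AMS interface. */
static int fake_outages = 3;

static int deliver_once(const char *user, const char *msgfile)
{
    if (fake_outages-- > 0) {
        errno = ETIMEDOUT;              /* looks like a transient outage */
        return -1;
    }
    printf("delivered %s to %s\n", msgfile, user);
    return 0;
}

enum { DELIVERED, REQUEUE };

static int deliver_with_retry(const char *user, const char *msgfile)
{
    unsigned delay = 1;                 /* seconds; short for the demo */
    int attempt;

    for (attempt = 0; attempt < 6; attempt++) {
        if (deliver_once(user, msgfile) == 0)
            return DELIVERED;
        if (errno != ETIMEDOUT && errno != EAGAIN && errno != EIO)
            break;                      /* doesn't look transient; give up */
        sleep(delay);
        delay *= 2;                     /* back off before the next try */
    }
    return REQUEUE;                     /* leave it for a background queue */
}

int main(void)
{
    if (deliver_with_retry("paw", "q0001") == DELIVERED)
        printf("done\n");
    else
        printf("requeued for a later pass\n");
    return 0;
}

The important property is the last line of deliver_with_retry(): a
persistent outage becomes a requeue, never a silent loss or a bounce for
something that isn't the sender's fault.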

Case history: Transarc's installation.  Initially, incoming mail was
stored in a publicly-writable version of /usr/spool/mail/userid, in AFS;
``/usr/spool/mail'' on all machines was a symlink to an AFS directory.
Mail was available, more or less, but the automatic attempts to deliver
the mail could (and did) fail silently.

Second generation was to put all the incoming mail on one central
machine's local disk, much as lots of shops that run sendmail+NFS do.
Since we weren't fundamentally hooked up with NFS, that meant that
everybody had to telnet to the central machine to process their incoming
mail.  Not bad, but it doesn't scale to a large organization, and even
with a small one, the little central machine started to grind ever more
slowly.

Third generation was to install AMDS, so that incoming mail is stored in
users' ~userid/Mailbox directories, in AFS, and you manipulate that mail
with AMS clients.  The only mail-related processes now running on the
central machine are AMDS ones: there are three central queues (two
immediate (``fast''), one background (``slow'')), a queuemail daemon for
each, a sendmail installation on only one machine for long-haul non-AFS
mail, and a CUI daemon providing processing for an entire local bboard
suite.
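
If you're curious what the fast/slow split amounts to, here's a sketch
(the directory names and try_delivery() are invented for illustration;
the real queues are managed by queuemail): a pass over a fast queue
either delivers a queue file or demotes it to the slow queue, which gets
retried much less often, so one stuck message doesn't hold up the mail
that would go through immediately.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Pretend delivery: fails for any queue file whose name ends in ".hard". */
static int try_delivery(const char *path)
{
    size_t n = strlen(path);
    return (n >= 5 && strcmp(path + n - 5, ".hard") == 0) ? -1 : 0;
}

/* One pass over a fast queue; demote persistent failures to the slow one. */
static void drain_fast_queue(const char *fastdir, const char *slowdir)
{
    DIR *d = opendir(fastdir);
    struct dirent *e;
    char from[1024], to[1024];

    if (d == NULL)
        return;                             /* nothing queued (or no dir) */
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;                       /* skip "." and ".." */
        snprintf(from, sizeof from, "%s/%s", fastdir, e->d_name);
        if (try_delivery(from) == 0) {
            remove(from);                   /* delivered; drop the queue file */
        } else {
            snprintf(to, sizeof to, "%s/%s", slowdir, e->d_name);
            rename(from, to);               /* let the slow queue worry about it */
        }
    }
    closedir(d);
}

int main(void)
{
    /* A queuemail-style daemon loops over something like this, with a
     * short sleep between passes over the fast queues and a much longer
     * one for the slow queue.  The paths here are made up. */
    drain_fast_queue("/var/spool/amds/fast1", "/var/spool/amds/slow");
    return 0;
}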

I, and many other recipients of this list, will be happy to try to
answer any further questions.

		Craig

nsb@THUMPER.BELLCORE.COM (Nathaniel Borenstein) (07/03/90)

Excerpts from internet.info-andrew: 3-Jul-90 Re: Convert to AMS?
Craig_Everhart@transarc. (4762+0)

> The only two AMS installations using AFS are the ones at andrew.cmu.edu
> and at transarc.com.  

Um, actually there's an IBM site (Rochester, Minnesota) that uses the
whole shebang and has at least twice as many bulletin boards as CMU --
over 4000 bboards, last I heard!  I believe they decided they needed the
AMS delivery system (AMDS) not too long after they started using AFS in
a big way.

My gut feeling is that wherever AFS goes, AMDS will eventually have to
follow.  -- NB

jl57+@ANDREW.CMU.EDU (Jay Laefer) (07/04/90)

Ummm, Craig, I think you meant 10^4 users, not 10^5.  I can think of a lot of
people who'd be just a little surprised to find 100,000 users at this cell.

We've currently got a little over 9000 users at the andrew.cmu.edu cell.
I have to admit to being pretty happy with the way our local mail is handled.
My only complaint is with "messages", which bogs down an IBM RT and a Sun 3/50.
(I have a Sun 3/50 on my desk.)  Fortunately, CMU is in the process of
installing DECstation 3100's all over campus, and "messages" runs wonderfully
on those.

	-Jay