[comp.std.unix] Standards Update, USENIX Standards Watchdog Committee

jsh@usenix.org (Jeffrey S. Haemer) (01/08/90)

From: Jeffrey S. Haemer <jsh@usenix.org>


            An Update on UNIX* and C Standards Activities

                            December 1989

                 USENIX Standards Watchdog Committee

                   Jeffrey S. Haemer, Report Editor

USENIX Standards Watchdog Committee Update

The reports that accompany this summary are for the Fall meeting of
IEEE 1003 and IEEE 1201, conducted the week of October 16-20, 1989, in
Brussels, Belgium.  (This isn't really true of the 1003.4 and 1003.8/1
reports, but let's overlook that.)

The reports are done quarterly, for the USENIX Association, by
volunteers from the individual standards committees.  The volunteers
are familiarly known as ``snitches'' and the reports as ``snitch
reports.'' The band of snitches and I make up the working committee of
the USENIX Standards Watchdog Committee.  The group also has a policy
committee: John S. Quarterman (chair), Alan G. Nemeth, and Shane P.
McCarron.  Our job is to let you know about things going on in the
standards arena that might affect your professional life - either now
or down the road a ways.

More formally:

     The basic USENIX policy regarding standards is:

          to attempt to prevent standards from prohibiting innovation.

     To do that, we

        o+ Collect and publish contextual and technical information
          such as the snitch reports that otherwise would be lost in
          committee minutes or rationale appendices or would not be
          written down at all.

        o+ Encourage appropriate people to get involved in the
          standards process.

        o+ Hold forums such as Birds of a Feather (BOF) meetings at
          conferences.  We sponsored one workshop on standards.

__________

  * UNIX is a registered trademark of AT&T in the U.S. and other
    countries.

        o+ Write and present proposals to standards bodies in specific
          areas.

        o+ Occasionally sponsor White Papers in particularly
          problematical areas, such as IEEE 1003.7 (in 1989) and
          possibly IEEE 1201 (in 1990).

        o+ Very occasionally lobby organizations that oversee standards
          bodies regarding new committees, documents, or balloting
          procedures.

        o+ Sponsor, jointly with EUUG (the European UNIX Users Group)
          and starting in mid-1989, a representative to the ISO/IEC
          JTC1 SC22 WG15 (ISO POSIX) standards committee.

     There are some things we do not do:

        o+ We do not form standards committees.  It's the USENIX
          Standards Watchdog Committee, not the POSIX Watchdog
          Committee, not part of POSIX, and not limited to POSIX.

        o+ We do not promote standards.

        o+ We do not endorse standards.

     Occasionally we may ask snitches to present proposals or argue
     positions on behalf of USENIX.  They are not required to do so
     and cannot do so unless asked by the USENIX Standards Watchdog
     Policy Committee.  Snitches mostly report.  We also encourage
     them to recommend actions for USENIX to take.

          John S. Quarterman, Chair, USENIX Standards Watchdog Committee

We don't yet have active snitches for all the committees and sometimes
have to beat the bushes for new snitches when old ones retire or can't
make a meeting, but the number of groups with active snitches is
growing steadily.  This quarter, you've seen reports from .1, .4, .5,
.6, .8/2, and a belated report of last quarter's .8/1 meeting, as well
as a report from 1201.  Reports from .2 and .7 are in the pipeline,
and may get posted before this summary does.  We have no reports from
.3, .8/[3-6], .9, .10, or .11, even though we asked Santa for these
reports for Christmas.

If you have comments or suggestions, or are interested in snitching
for any group, please contact me (jsh@usenix.org) or John
(jsq@usenix.org).  If you want to make suggestions in person, both of
us go to the POSIX meetings.  The next set will be January 8-12, at
the Hotel Intercontinental in New Orleans, Louisiana.  Meetings after
that will be April 23-27, 1990 in Salt Lake City, Utah, and July 16-
20, 1990 in Danvers (Boston), Massachusetts.

I've appended some editorial commentary on problems I see facing each
group.  I've emphasized non-technical problems, which are unlikely to
appear in the official minutes and mailings of the committees.  If the
comments for a particular group move you to read a snitch report that
you wouldn't have read otherwise, they've served their purpose.  Be
warned, however, that when you read the snitch report, you may
discover that the snitch's opinion differs completely from mine.

1003.0

Outside of dot zero, this group is referred to as ``the group that
lets marketing participate in POSIX.'' Meetings seem to be dominated
by representatives from upper management of large and influential
organizations; there are plenty of tailor-made suits, and few of the
jeans and T-shirts that abound in a dot one or dot two meeting.
There's a good chance that reading this is making you nervous; that
you're thinking, ``Uh, oh.  I'll bet the meetings have a lot of
politics, positioning, and discussion about `potential direction.'''
Correct.  This group carries all the baggage, good and bad, that you'd
expect by looking at it.

For example, their official job is to produce the ``POSIX Guide:'' a
document to help those seeking a path through the open-systems
standards maze.  Realistically, if the IEEE had just hired a standards
expert who wrote well to produce the guide, it would be done, and both
cleaner and shorter than the current draft.

Moreover, because dot zero can see the entire open-systems standards
activity as a whole, they have a lot of influence over what new areas
POSIX addresses.  Unfortunately, politics sometimes has a heavy hand.
The last two groups whose creation dot zero recommended were 1201 and
the internationalization study group.  There's widespread sentiment,
outside of each group (and, in the case of internationalization,
inside of the group) that these groups were created at the wrong time,
for the wrong reason, and should be dissolved, but won't be.  And
sometimes, you can find the group discussing areas about which they
appear to have little technical expertise.  Meeting before last, dot
zero spent an uncomfortable amount of time arguing about graphics
primitives.

That's the predictable bad side.  The good side?  Frankly, these folks
provide immense credibility and widespread support for POSIX.  If dot
zero didn't exist, the only way for some of the most important people
and organizations in the POSIX effort to participate would be in a
more technical group, where the narrow focus would block the broad
overview that these folks need, and which doing the guide provides.

In fact, from here it looks as though it would be beneficial to POSIX
to have dot zero actually do more, not less, than it's doing.  For
example, if dot five is ever going to have much impact in the Ada
community, someone's going to have to explain to that community why
POSIX is important, and why they should pay more attention to it.
That's not a job for the folks you find in dot five meetings (mostly
language experts); it's a job for people who wear tailor-made suits;
who understand the history, the direction, and the importance of the
open systems effort; and who know industry politics.  And there are
members of dot zero who fit that description to a tee.

1003.1

Is dot one still doing anything, now that the ugly green book is in
print?  Absolutely.

First, it's moved into maintenance and bug-fix mode.  It's working on
a pair of extensions to dot 1 (A and B), on re-formatting the ugly
green book to make the ISO happy, and on figuring out how to make the
existing standard language-independent.  (The developer, he works from
sun to sun, but the maintainer's work is never done.) Second, it's
advising other groups and helping arbitrate their disputes.  An
example is the recent flap over transparent file access, in which the
group defining the standard (1003.8/1) was told, in no uncertain
terms, that NFS wouldn't do, because it wasn't consistent with dot one
semantics.  One wonders if things like the dot six chmod dispute will
finally be resolved here as well.

A key to success will be keeping enough of the original dot one
participants available and active to insure consistency.

1003.2

Dot one standardized the UNIX section two and three interfaces.  (Okay,
okay.  All together now: ``It's not UNIX, it's POSIX.  All resemblance
to any real operating system, living or dead, explicit or implied, is
purely coincidental.'') Dot two is making a standard for UNIX section
one commands.  Sort of.

The dot two draft currently in ballot, ``dot-two classic,'' is
intended to standardize commands that you'd find in shell scripts.
Unfortunately, if you look at dot-two classic you'll see things
missing.  In fact, you could have a strictly conforming system that
would be awfully hard to develop software on or port software to.
To solve this, NIST pressured dot two into drawing up a standard for a
user portability extension (UPE).  The distinction is supposed to be
that dot-two classic standardizes commands necessary for shell script
portability, while the UPE standardizes things that are primarily
interactive, but aid user portability.

The two documents have some strategic problems.

   o+ Many folks who developed dot-two classic say the UPE is outside
     of dot two's charter, and won't participate in the effort.  This
     sort of behavior unquestionably harms the UPE.  Since I predict
     that the outside world will make no distinction between the UPE
     and the rest of the standard, it will actually harm the entire
     dot-two effort.

   o+ The classification criteria are unconvincing.  Nm(1) is in the
     UPE.  Is it really primarily used interactively?

   o+ Cc has been renamed c89, and lint may become lint89.  This is
     silly and annoying, but look on the bright side: at least we can
     see why c89 wasn't put in the UPE.  Had it been, it would have
     had to have a name users expected.

   o+ Who died and left NIST in charge?  POSIX seems constantly to be
     doing things that it didn't really want to do because it was
     afraid that if it didn't, NIST would strike out on its own.
     (Other instances are the accelerated timetables of .1 and .2, and
     the creation of 1003.7 and 1201.)

   o+ Crucial pieces of software are missing from dot two.  The largest
     crevasse is the lack of any form of source-code control.  People
     on the committee don't want to suffer through an SCCS-RCS debate.
     POSIX dealt with the cpio-tar debate.  (It decided not to
     decide.) POSIX dealt with the vi-emacs debate.  (The UPE provides
     a standard for ex/vi.) POSIX is working on the NFS-RFS debate,
     and a host of others.  Such resolutions are a part of its
     responsibility and authority.  POSIX is even working on the
     Motif-Open/Look debate (whether it should or not).

     At the very least, the standard could require some sort of source
     code control, with an option specifying which flavor is
     available.  Perhaps we could ask NIST to threaten to provide a
     specification.

As a final note, because dot two (collective) standardizes user-level
commands, it really can provide practical portability across operating
systems.  Shell scripts written on a dot-two-conforming UNIX system
should run just fine on an MS-DOS system under the MKS toolkit.

1003.3

Dot three is writing test assertions for standards.  This means dot
three is doing the most boring work in the POSIX arena.  Oh, shoot,
that just slipped out.  But what's amazing is that the committee
members don't see it as boring.  In fact, Roger Martin, who, as senior
representative of the NIST, is surely one of the most
influential people in the POSIX effort, actually chairs this
committee.  Maybe they know something I don't.

Dot three is balloting dot one assertions and working on dot two.  The
process is moving at standards-committee speed, but has the advantage
of having prior testing art as a touchstone (existing MindCraft, IBM,
and NIST test work).  The dilemma confronting the group is what to do
about test work for other committees, which are proliferating like
lagomorphs.  Dot three is clearly outnumbered, and needs some
administrative cavalry to come to its rescue.  Unless it expands
drastically (probably in the form of little subcommittees and a
steering committee) or is allowed to delegate some of the
responsibility of generating test assertions to the committees
generating the standards, it will never finish.  (``Whew, okay, dot
five's done.  Does anyone want to volunteer to be a liaison with dot
thirty-seven?'')

1003.4

Dot four is caught in a trap fashioned by evolution.  It began as a
real-time committee.  Somehow, it's metamorphosed into a catch-all,
``operating-system extensions'' committee.  Several problems have
sprung from this.

   o+ Some of the early proposed extensions were probably right for
     real-time, but aren't general enough to be the right approach at
     the OS level.

   o+ Pieces of the dot-four document probably belong in the dot
     one document instead of a separate document.  Presumably, ISO
     will perform this merge down the road.  Should the IEEE follow
     suit?

   o+ Because the dot-four extensions aren't as firmly based in
     established UNIX practice as the functionality specified in dot
     one and two, debate over how to do things is more heated, and the
     likelihood that the eventual, official, standard solution will be
     an overly complex and messy compromise is far higher.  For
     example, there is a currently active dispute about something as
     fundamental as how threads and signals should interact.
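
For readers who haven't followed that dispute, here is a sketch of one
model people argue for: give each thread its own signal mask, and let
a designated thread collect asynchronous signals synchronously.  The
function names below are illustrative only - they are not taken from
the dot four drafts - so read this as a sketch of the idea, not as
anything the committee has adopted.

     #include <pthread.h>
     #include <signal.h>
     #include <stdio.h>

     static void *catcher(void *arg)
     {
         sigset_t *set = arg;
         int sig;

         for (;;) {                  /* collect signals synchronously */
             if (sigwait(set, &sig) == 0)
                 printf("signal %d went to the catcher thread\n", sig);
         }
         return NULL;
     }

     int main(void)
     {
         sigset_t set;
         pthread_t tid;

         sigemptyset(&set);
         sigaddset(&set, SIGINT);
         /* Block SIGINT here; threads created later inherit the mask,
            so no thread ever sees it delivered asynchronously. */
         pthread_sigmask(SIG_BLOCK, &set, NULL);

         pthread_create(&tid, NULL, catcher, &set);
         pthread_join(tid, NULL);    /* the catcher never returns */
         return 0;
     }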

Unfortunately, all this change has diverted attention from a problem
that has to be dealt with soon - how to guarantee consistency between
dot four and dot five, the Ada-language-binding group.  Task
semantics are specified by the Ada language definition.  In order to
get an Ada binding to dot four's standard (which someone will have to
do), dot four's threads will have to be consistent with the way dot
five uses tasks in their current working document.  With dot five's
low numbers, the only practical way to insure this seems to be to have
dot four aggressively track the work of dot five.

1003.5

Dot five is creating an Ada-language binding for POSIX.  What's
``Ada-language binding'' mean?  Just that an Ada programmer should be
able to get any functionality provided by 1003.1 from within an Ada
program.  (Right now, they're working on an Ada-language binding for
the dot one standard, but eventually, they'll also address other
interfaces, including those from dot four, dot six, and dot eight.)
They face at least two kinds of technical problems and one social one.

The first technical problem is finding some way to express everything
in 1003.1 in Ada.  That's not always easy, since the section two and
three interfaces standardized by dot one evolved in a C universe, and
the semantics of C are sometimes hard to express in Ada, and vice-
versa.  Examples are Ada's insistence on strong typing, which makes
things like ioctl() look pretty odd, and Ada's tasking semantics,
which require careful thinking about fork(), exec(), and kill().
Luckily, dot five is populated by people who are Ada-language wizards,
and seem to be able to solve these problems.  One interesting
difference between dot five and dot nine is that the FORTRAN group has
chosen to retain the organization of the original dot one document so
that their document can simply point into the ugly green book in many
cases, whereas dot five chose to re-organize wherever it seemed to
help the flow of their document.  It will be interesting to see which
decision ends up producing the most useful document.
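
To make the ioctl() remark above concrete: the type of ioctl()'s third
argument depends entirely on the request, which is effortless to write
in C and painful to express in a strongly typed Ada binding.  A small
C fragment (my example, not dot five's):

     #include <stdio.h>
     #include <sys/ioctl.h>
     #include <unistd.h>

     int main(void)
     {
         struct winsize ws;   /* one request takes a structure pointer */
         int pending;         /* another takes a pointer to an int     */

         if (ioctl(STDIN_FILENO, TIOCGWINSZ, &ws) == 0)
             printf("%d rows by %d columns\n", ws.ws_row, ws.ws_col);
         if (ioctl(STDIN_FILENO, FIONREAD, &pending) == 0)
             printf("%d bytes already typed ahead\n", pending);
         return 0;
     }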

The second technical problem is making the solutions look like Ada.
For more discussion of this, see the dot-nine (FORTRAN bindings)
summary.  Again, this is a problem for Ada wizards, and dot five can
handle it.

The social problem?  Interest in dot five's work, outside of their
committee, is low.  Ada is out-of-favor with most UNIX programmers.
(``Geez, 1201 is a mess.  Their stuff's gonna look as ugly as Ada.'')
Conversely, most of the Ada community's not interested in UNIX.
(``Huh?  Another `standard' operating environment?  How's it compare
to, say, PCTE?  No, never mind.  Just let me know every few years how
it's coming along.'') The group that has the hardest problem - welding
together two, well-developed, standardized, disparate universes - has
the least help.

Despite all of this, the standard looks like it's coming close to
ballot, which means people ought to start paying attention to it
before they have no choice.

1003.6

Most of the UNIX community would still feel more at home at a Rainbow
gathering than reading the DOD rainbow books.  The unfamiliar-buzzword
frequency at dot six (security) meetings is quite high.  If you can
get someone patient to explain some of the issues, though, they're
pretty interesting.  The technical problems they're solving each boil
down to thinking through how to graft very foreign ideas onto UNIX
without damaging it beyond recognition.  (The recent posting about
chmod and access control lists, in comp.std.unix by Ana Maria de
Alvare and Mike Ressler, is a wonderful, detailed example.)

Dot six's prominent, non-technical problem is just as interesting.
The government has made it clear that vendors who can supply a
``secure UNIX'' will make a lot of money.  No fools, major vendors
have been furiously working on implementations.  The push to
provide a POSIX security standard comes at a time when these vendors
are already quite far along in their efforts, but still some way from
releasing the products.  Dot six attendees from such corporations
can't say too much, because it will give away what they're doing
(remember, too, that this is security), but must somehow insure that
the standard that emerges is compatible with their company's existing,
secret implementation.

1003.7

There is no single, standard body of practice for UNIX system
administration, the area dot seven is standardizing.  Rather than seek
a compromise, dot seven has decided to re-invent system administration
from scratch.  This was probably necessary simply because there isn't
enough existing practice to compromise on.  Currently, their intent is
to provide an object-oriented standard, with objects specified in
ASN.1 and administration of a multi-machine, networked system as a
target.  (This, incidentally, was the recommendation of a USENIX White
Paper on system administration by Susanne Smith and John Quarterman.)
The committee doesn't have a high proportion of full-time system
administrators, or a large body of experience in object-oriented
programming.  It's essentially doing research by committee.  Despite
this, general sentiment outside the committee seems to be that it has
chosen a reasonable approach, but that progress may be slow.

A big danger is that they'll end up with a fatally flawed solution:
lacking good, available implementations; distinct enough from existing
practices, where they exist, to hamper adoption; and with no clear-cut
advantage, beyond standards adherence, to be gained by replacing
existing ad-hoc solutions.  The standard could be widely ignored.

What might prevent that from happening?  Lots of implementations.
Object-oriented programming and C++ are fashionable (at the 1988
Winter Usenix C++ conference, Andrew Koenig referred to C++ as a
``strongly hyped language''); networked, UNIX systems are ubiquitous
in the research community; and system administration has the feeling
of a user-level, solvable problem.  If dot seven (perhaps with the
help of dot zero) can publicize their work in the object-oriented
programming community, we can expect OOPSLA conferences and
comp.sources.unix to overflow with high-quality, practical, field-
tested, object-oriented, system administration packages that conform
to dot seven.

1003.8

There are two administrative problems facing dot eight, the network
services group.  Both stem directly from the nature of the subject.
There is not yet agreement on how to solve either one.

The first is its continued growth.  There is now serious talk of
making each subgroup a full-fledged POSIX committee.  Since there are
currently six groups (transparent file access, network IPC, remote
procedure call, OSI/MAP services, X.400 mail gateway, and directory
services), this would increase the number of POSIX committees by
nearly 50%, and make networking the single largest aspect of the
standards work.  This, of course, is because standards are beneficial
in operating systems and single-machine applications, but
indispensable in networking.

The second is intergroup coordination.  Each of the subgroups is
specialized enough that most dot eight members only know what's going
on in their own subgroup.  But because the issues are networking
issues, it's important that someone knows enough about what each group
is doing to prevent duplication of effort or glaring omissions.  And
that's only a piece of the problem.  Topics like system administration
and security are hard enough on a single, stand-alone machine.  In a
networked world, they're even harder.  Someone needs to be doing the
system engineering required to insure that all these areas of overlap
are addressed, addressed exactly once, and completed in time frames
that don't leave any group hanging, awaiting another group's work.

The SEC will have to sort out how to solve these problems.  In the
meantime, it would certainly help if we had snitches for each subgroup
in dot eight.  Any volunteers for .8/[3-6]?

1003.9

Dot nine, which is providing FORTRAN bindings, is really fun to watch.
They're fairly un-structured, and consequently get things done at an
incredible clip.  They're also friendly; when someone new arrives,
they actually stop, smile, and provide introductions all around.  It
helps that there are only half-a-dozen attendees or so, as opposed to
the half-a-hundred you might see in some of the other groups.
Meetings have sort of a ``we're all in this together/defenders of the
Alamo'' atmosphere.

The group was formed after two separate companies independently
implemented FORTRAN bindings for dot one and presented them to the
UniForum technical committee on supercomputing.  None of this, ``Let's
consider forming a study group to generate a PAR to form a committee
to think about how we might do it,'' stuff.  This was rapid
prototyping at the standards level.

Except for the advantage of being able to build on prior art (the two
implementations), dot nine has the same basic problems that dot five
has.  What did the prior art get them?  The most interesting thing is
that a correct FORTRAN binding isn't the same as a good FORTRAN
binding.  Both groups began by creating a binding that paralleled the
original dot one standard fairly closely.  Complaints about the
occasional non-FORTRANness of the result have motivated the group to
try to re-design the bindings to seem ``normal'' to typical FORTRAN
programmers.  As a simple example, FORTRAN-77 would certainly allow
the declaration of a variable in common called ERRNO, to hold the
error return code.  Users, however, would find such a name misleading;
integer variables, by default and by convention, begin with ``I''
through ``N.''

It is worth noting that dot nine is actually providing FORTRAN-77
bindings, and simply ignoring FORTRAN-8x.  (Who was it that said of
8x, ``Looks like a nice language.  Too bad it's not FORTRAN''?)
Currently, 1003 intends to move to a language-independent
specification by the time 8x is done, which, it is claimed, will ease
the task of creating 8x bindings.

On the surface, it seems logical and appealing that documents like
1003.1 be re-written as a language-independent standard, with a
separate C-language binding, analogous to those of dot five and dot
nine.  But is it really?

First, it fosters the illusion that POSIX is divorced from, and
unconstrained by, its primary implementation language.  Should the
prohibition against nul characters in filenames be a base-standard
restriction or a C-binding restriction?

I've seen a dot five committee member argue that it's the former.
Looked at in isolation, this is almost sensible.  If Ada becomes the
only language anyone wants to run, yet the government still mandates
POSIX compliance, why should a POSIX implementation prohibit its
filenames from containing characters that aren't special to Ada?  At
the same time, every POSIX attendee outside of dot five seems repelled
by the idea of filenames that contain nuls.  (Quiz: Can you specify a
C-language program or shell script that will create a filename
containing a nul?)
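
(For the curious, here's a small C experiment that suggests why the
answer to the quiz is no, as long as the interface is specified in C:
the pathname handed to the kernel is a NUL-terminated string, so an
embedded nul simply ends the name.)

     #include <fcntl.h>
     #include <stdio.h>
     #include <unistd.h>

     int main(void)
     {
         /* "abc\0def" -- everything after the first nul is invisible
            to open(), so the file created is just "abc". */
         char name[] = { 'a', 'b', 'c', '\0', 'd', 'e', 'f', '\0' };
         int fd = open(name, O_CREAT | O_WRONLY, 0666);

         if (fd == -1) {
             perror("open");
             return 1;
         }
         printf("created \"%s\"\n", name);   /* prints: created "abc" */
         close(fd);
         return 0;
     }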

Second, C provides an existing, precise, widely-known language in
which POSIX can be specified.  If peculiarities of C make implementing
some portions of a standard, specified in C, difficult in another
language, then there are four clear solutions:

  1.  change the specification so that it's equally easy in C and in
      other languages,

  2.  change the specification so that it's difficult in every
      language,

  3.  change the specification so that it's easy in some other
      language but difficult in C, or

  4.  make the specification vague enough so that it can be done in
      incompatible (though equally easy) ways in each language.

Only the first option makes sense.  Making the specification
language-independent means either using an imprecise language, which
risks four, or picking some little-known specification language (like
VDL), which risks two and three.  Declaring C the specification
language does limit the useful lifetime of POSIX to the useful
lifetime of C, but if we don't think we'll come up with good
replacements for both in a few decades, we're facing worse problems
than language-dependent specifications.

Last, if you think the standards process is slow now, wait until the
IEEE tries to find committee volunteers who are fluent in both UNIX
and some language-independent specification language.  Not only will
the useful lifetime of POSIX remain wedded to the useful lifetime of
C, but both will expire before the language-independent version of dot
one is complete.

It would be nice if this push for language-independent POSIX would go
away quietly, but it won't.

1003.10

In July, at the San Jose meeting, John Caywood of Unisys caught me in
the halls and said, accusingly, ``I understand you think
supercomputers don't need a batch facility.'' I didn't have the
foggiest idea what he was talking about, but it seemed like as good a
chance as any to get a tutorial on dot ten, the supercomputing group,
so I grabbed it. (Pretty aggressively helpful folks in this
supercomputing group.  If only someone in it could be persuaded to
file a snitch report.)

Here's the story:

     Articles about software engineering like to point out that
     approaches and tools have changed from those used twenty years
     ago; computers and computing resources are now much cheaper than
     programmers and their time, while twenty years ago the reverse
     was true.  These articles are written by people who've never used
     a Cray.  A typical supercomputer application might run on a $25M,
     non-byte-addressable, non-virtual-memory machine, require 100 to
     1000 Mbytes of memory, and run for 10 Ksecs.  Expected running
     time for jobs can be greater than the machine's mean-time-to-
     failure.  The same techniques that were common twenty years ago
     are still important on these machines, for the same reasons -
     we're working close to the limits of hardware art.

The card punches are gone, but users often still can't log in to the
machines directly, and must submit jobs through workstation or
mainframe front ends.  Resources are severely limited, and access to
those resources needs to be carefully controlled.  The two needs that
surface most often are checkpointing, and a batch facility.

Checkpointing lets you re-start a job in the middle.  If you've used
five hours of Cray time, and need to continue your run for another
hour but have temporarily run out of grant money, you don't want to
start again from scratch when the money appears.  If you've used six
months of real time running a virus-cracking program and the machine
goes down, you might be willing to lose a few hours, even days, of
work, but can't afford to lose everything.  Checkpointing is a hard
problem, without a generally agreed-upon solution.

A batch facility is easier to provide.  Both Convex and Cray currently
support NQS, a public-domain, network queueing system.  The product
has enough known problems that the group is re-working the facility,
but the basic model is well-understood, and the committee members,
both users and vendors, seem to want to adopt it.  The goal is
command-level and library-level support for batch queues that will
provide effective resource management for really big jobs.  Users will
be able to do things like submit a job to a large machine through a
wide-area network, specify the resources - memory, disk space, time,
tape drives, etc. - that the job will need to run to completion, and
then check back a week or two later to find out how far their job's
progressed in the queue.

The group is determined to make rapid progress, and to that end is
holding 6-7 meetings a year.  One other thing: the group is actually
doing an application profile, not a standards document.  For a
clarification of the distinction, see the discussion of dot eleven.

1003.11

Dot eleven has begun work on an application profile (AP) on
transaction processing (TP).  An AP is a set of pointers into the
POSIX Open System Environment (OSE).  For example, the TP AP might
say, ``For dot eleven conformance, you need to conform to dot one, dot
four, sections 2.3.7 and 2.3.8 of dot 6, 1003.8 except for /2, and
provide a batch facility as specified in the dot 10 AP.'' A group
doing an AP will also look for holes or vague areas in the existing
standards, as they relate to the application area, point them out
to the appropriate committee, and possibly chip in to help the
committee solve them.  If they find a gap that really doesn't fall
into anyone else's area, they can write a PAR, requesting that the SEC
(1003's oversight committee) charter them to write a standard to cover
it.

Dot eleven is still in the early, crucial stage of trying to figure
out what it wants to do.  Because of fundamental differences in
philosophy of the members, the group seems to be thrashing a lot.
There is a clear division between folks who want to pick a specific
model of TP and write an AP to cover it, and folks who think a model
is a far-too-detailed place to start.  The latter group is small, but
not easily dismissed.

It will be interesting to see how dot eleven breaks this log jam, and
what the resolution is.  As an aside, many of the modelers are from
the X/OPEN and ISO TP groups, which are already pushing specific
models of their own; this suggests what kinds of models we're likely
to get if the modeling majority wins.

X3J11

A single individual, Russell Hansberry, is blocking the official
approval of the ANSI standard for C on procedural grounds.  At some
point, someone failed to comply with the letter of IEEE rules for
ballot resolution, and Hansberry is using the irregularity to delay
adoption of the standard.

This has had an odd effect in the 1003 committees.  No one wants to
see something like this inflicted on his or her group, so folks are
being particularly careful to dot all i's and cross all t's.  I say
odd because it doesn't look as though Hansberry's objections will have
any effect whatsoever on either the standard, or its effectiveness.
Whether ANSI puts its stamp on it or not, every C compiler vendor is
implementing the standard, and every book (even K&R) is being written to it.
X3J11 has replaced one de-facto standard with another, even stronger
one.

1201.1

What's that you say, bunky?  You say you're Jewish or Moslem, and you
can look at Xwindows as long as you don't eat it?  Well then, you
won't care much for 1201.1, which is supposed to be ``User Interface:
Application Programming Interface,'' but is really ``How much will the
Motif majority have to compromise with the Open/Look minority before
force-feeding us a thick standard full of `Xm[A-Z]...' functions with
long names and longer argument lists?''

Were this group to change its name to ``Xwindows application
programming interface,'' you might not hear nearly as much grousing
from folks outside the working group.  As it is, the most positive
comments you hear are, ``Well, X is pretty awful, but I guess we're
stuck with it,'' and ``What could they do?  If POSIX hadn't undertaken
it, NIST would have.''

If 1201 is to continue to be called ``User Interface,'' these aren't
valid arguments for standardizing on X or toolkits derived from it.
In what sense are we stuck with X?  The number of X applications is
still small, and if X and its toolkits aren't right for the job, it
will stay small.  Graphics hardware will continue to race ahead,
someone smart will show us a better way to do graphics, and X will
become a backwater.  If they are right, some toolkit will become a
de-facto standard, the toolkit will mature, and the IEEE can write a
realistic standard based on it.

Moreover, if NIST wants to write a standard based on X, what's wrong
with that?  If they come up with something that's important in the
POSIX world, good for them.  ANSI or the IEEE can adopt it, the way
ANSI's finally getting around to adopting C.  If NIST fails, it's not
the IEEE's problem.

If 1201.1 ignores X and NIST, can it do anything?  Certainly.  The
real problem with the occasionally asked question, ``are standards
bad?'' is that it omits the first word: ``When.'' Asked properly, the
answer is, ``When they're at the wrong level.'' API's XVT is an example
of a toolkit that sits above libraries like Motif or the Mac toolbox,
and provides programmers with much of the standard functionality
necessary to write useful applications on a wide variety of window
systems.  Even if XVT isn't the answer, it provides proof by example
that we can have a window-system-independent, application programming
interface for windowing systems.  1201.1 could provide a useful
standard at that level.  Will it?  Watch and see.
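
As a purely illustrative sketch - these names are invented, and this
is not XVT's actual interface - a standard at that level might amount
to little more than a small, window-system-independent set of calls,
with Motif, Open/Look, or the Mac toolbox hidden behind each
implementation:

     typedef struct ui_window ui_window;  /* opaque to the application */
     enum ui_event { UI_KEY, UI_REDRAW, UI_QUIT };

     ui_window    *ui_open(const char *title, int width, int height);
     void          ui_text(ui_window *w, int x, int y, const char *s);
     enum ui_event ui_next_event(ui_window *w);
     void          ui_close(ui_window *w);

     /* An application written to these four calls never mentions Xm,
        Xt, or the toolbox; each window system supplies its own
        implementation of the layer. */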

Volume-Number: Volume 18, Number 10

gwyn@smoke.brl.mil (Doug Gwyn) (01/09/90)

From: Doug Gwyn <gwyn@smoke.brl.mil>


In article <500@longway.TIC.COM> std-unix@uunet.uu.net writes:
>From: Jeffrey S. Haemer <jsh@usenix.org>
>            An Update on UNIX* and C Standards Activities

I have several comments on these issues (and will try to refrain
from commenting on the ones I don't track closely).

>1003.1
>An example is the recent flap over transparent file access, in which the
>group defining the standard (1003.8/1) was told, in no uncertain terms,
>that NFS wouldn't do, because it wasn't consistent with dot one semantics.

This is an important point; 1003.1 did very much have in mind network
file systems, and we decided that the full semantics specified in 1003.1
were really required for the benefit of portable applications on UNIX
systems (or workalikes), which is what 1003 was originally all about.
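
To give one concrete example of what "full semantics" buys an
application (my illustration, not anything from the standard's text):
an unlinked file must continue to exist, nameless, until the last
descriptor referring to it is closed, and a lot of portable code
quietly depends on exactly that.

     #include <fcntl.h>
     #include <stdio.h>
     #include <sys/types.h>
     #include <unistd.h>

     int main(void)
     {
         const char *name = "scratch.tmp";
         char buf[32];
         ssize_t n;
         int fd = open(name, O_RDWR | O_CREAT | O_TRUNC, 0600);

         if (fd == -1) {
             perror("open");
             return 1;
         }
         unlink(name);                   /* directory entry is gone...   */

         write(fd, "still here\n", 11);  /* ...but the file itself must  */
         lseek(fd, (off_t)0, SEEK_SET);  /* persist until the last close */
         n = read(fd, buf, sizeof buf - 1);
         if (n > 0) {
             buf[n] = '\0';
             fputs(buf, stdout);
         }
         close(fd);                      /* only now may the space go    */
         return 0;
     }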

Having run into problem after problem with the lack of full 1003.1
semantics in our NFS-supporting environment, I fully agree with the
decision that applications should be able to rely on "UNIX semantics"
and that NFS simply does not meet this criterion.  (There are other
network-transparent file system implementations that do; the design
of NFS was constrained by the desire to support MS/DOS and to be
"stateless", both of which run contrary to UNIX filesystem semantics.)

>One wonders if things like the dot six chmod dispute will finally be
>resolved here as well.

Fairly late in the drafting of Std 1003.1, consultation with NCSC and
other parties concerned with "UNIX security" led to a fundamental
change in the way that privileges were specified.  That's when the
notion of "appropriate privilege" and the acknowledgement of optional
"additional mechanisms" were added, deliberately left generally vague
so as to encompass any other specification that would be acceptable
to the 1003.1 people as not interfering unduly with the traditional
UNIX approach to file access permissions.

Upon reviewing the chmod spec in IEEE Std 1003.1-1988, I see no reason
to think that it would interfere with addition of ACL or other similar
additional mechanisms, the rules for which would be included in the
implementation-defined "appropriate privileges".  Remember, the UNIX-
like access rules of 1003.1 apply only when there is no additional
mechanism (or the additional mechanism is satisfied).
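
In outline, what I read the spec as permitting looks something like
the fragment below.  This is my sketch of the intent, not wording from
the standard, and acl_present()/acl_satisfied() are hypothetical
stand-ins for whatever additional mechanism an implementation
supplies.

     #include <sys/stat.h>
     #include <sys/types.h>

     /* Hypothetical hooks, supplied by the implementation: */
     int acl_present(const struct stat *st);
     int acl_satisfied(const struct stat *st,
                       uid_t uid, gid_t gid, int want);
     int appropriate_privilege(uid_t uid);

     int may_access(const struct stat *st, uid_t uid, gid_t gid, int want)
     {
         mode_t bits;

         if (appropriate_privilege(uid))
             return 1;               /* implementation-defined override */

         /* The additional mechanism, if any, gets a chance to deny. */
         if (acl_present(st) && !acl_satisfied(st, uid, gid, want))
             return 0;

         /* Otherwise (or if it is satisfied) the familiar rules apply;
            "want" is a mask of the rwx bits being requested. */
         if (uid == st->st_uid)
             bits = st->st_mode >> 6;     /* owner rwx */
         else if (gid == st->st_gid)
             bits = st->st_mode >> 3;     /* group rwx */
         else
             bits = st->st_mode;          /* other rwx */
         return (bits & want) == (mode_t)want;
     }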

>A key to success will be keeping enough of the original dot one
>participants available and active to insure consistency.

Good luck with this.  Personally, I couldn't afford to pay the dues
and limited my membership to 1003.2 once Std 1003.1-1988 was published.

>1003.2
>The dot two draft currently in ballot, ``dot-two classic,'' is
>intended to standardize commands that you'd find in shell scripts.
>Unfortunately, if you look at dot-two classic you'll see things
>missing.  In fact, you could have a strictly conforming system that
>would be awfully hard to to develop software on or port software to.

From my point of view, 1003.2 unfortunately included TOO MUCH, not
too little, for portable application support.  (My views on the
proper set of commands and options were spelled out in a very early
1003.2 document.)

>To solve this, NIST pressured dot two into drawing up a standard for a
>user portability extension (UPE).  The distinction is supposed to be
>that dot-two classic standardizes commands necessary for shell script
>portability, while the UPE standardizes things that are primarily
>interactive, but aid user portability.

NIST apparently thinks that all the horrible existing tools they're
familiar with should be forced upon all environments.  I think this
does interactive users a DISservice.  For one thing, many interesting
architectures require different tools from the traditional ones, and
requiring the traditional ones merely makes it difficult or impossible
for better environments to be provided under contracts that require
conformance to the UPE.  (This probably includes most future U.S.
government procurements, which means most major vendor OSes.)

>The two documents have some strategic problems.
>   o+ Many folks who developed dot-two classic say the UPE is outside
>     of dot two's charter, and won't participate in the effort.  This
>     sort of behavior unquestionably harms the UPE.  Since I predict
>     that the outside world will make no distinction between the UPE
>     and the rest of the standard, it will actually harm the entire
>     dot-two effort.

But they're right.  The UPE effort should be STOPPED, immediately.
There IS no "right" way to standardize this area.

>   o+ The classification criteria are unconvincing.  Nm(1) is in the
>     UPE.  Is it really primarily used interactively?

"nm" is precisely the sort of thing that should NOT be standardized
at all, due to widely varying environmental needs in software
generation systems.  There have been numerous attempts to standardize
object module formats (which is similar to standardizing "nm" behavior),
and none of them have been successful over anywhere near the range of
systems that a 1003 standard should properly encompass.

>   o+ Cc has been renamed c89, and lint may become lint89.  This is
>     silly and annoying, but look on the bright side: at least we can
>     see why c89 wasn't put in the UPE.  Had it been, it would have
>     had to have a name users expected.

"cc" (and "nm") is not sufficiently useful to APPLICATIONS to merit
being in 1003.2 at all.  Certainly its options cannot be fully specified
due to the wide range of system-specific support needed in many
environments.  Thus, "cc options files" where options include just -c
-Iwherever -Dname=value and -o file and files includes -lwhatever is all
that has fully portable meaning.  Is there really any UNIX implementation
that doesn't provide these so that a standard is needed?  I think not.

>   o+ Who died and left NIST in charge?  POSIX seems constantly to be
>     doing things that it didn't really want to do because it was
>     afraid that if it didn't, NIST would strike out on its own.
>     Others instances are the accelerated timetables of .1 and .2, and
>     the creation of 1003.7 and 1201.)

The problem is, NIST prepares FIPS and there is essentially no stopping
them.  Because FIPS are binding on government procurements (unless
specific waivers are obtained), they have heavy economic impact on
vendors.  In the "good old days", NBS allowed the computing industry
to develop suitable standards and later blessed them with FIPS.  With
the change in political climate that occurred with the Reagan
administration, which was responsible for the name change from NBS to
NIST, NIST was given a more "proactive" role in the development of
technology.  Unfortunately they seem to think that forcing standards
advances the technology, whereas that would be true only under
favorable circumstances (which unsuitable standards do not promote).
(Actually I think that the whole idea of a government attempting to
promote technology is seriously in error, but that's another topic.)

I don't know how you can tone down NIST.  Perhaps if enough congressmen
receive enough complaints some pressure may be applied.

>   o+ Crucial pieces of software are missing from dot two.  The largest
>     crevasse is the lack of any form of source-code control.  People
>     on the committee don't want to suffer through an SCCS-RCS debate.
>     POSIX dealt with the cpio-tar debate.  (It decided not to
>     decide.) POSIX dealt with the vi-emacs debate.  (The UPE provides
>     a standard for ex/vi.) POSIX is working on the NFS-RFS debate,
>     and a host of others.  Such resolutions are a part of its
>     responsibility and authority.  POSIX is even working on the
>     Motif-Open/Look debate (whether it should or not).

The problem with all these is that there is not a "good enough"
solution in widespread existing practice.  This should tell the
parties involved that standardization in these areas is therefore
premature, since it would in effect "lock in" inferior technology.
However, marketing folks have jumped on the standardization
bandwagon and want standards even where they're inappropriate.
(This is especially apparent in the field of computer graphics.)

>     At the very least, the standard could require some sort of source
>     code control, with an option specifying which flavor is
>     available.  Perhaps we could ask NIST to threaten to provide a
>     specification.

Oh, ugh.  Such options are evil in a standard, because they force
developers to always allow for multiple ways of doing things, which is
more work than necessary.

You shouldn't even joke about using NIST to force premature decisions,
as that's been a real problem already and we don't need it to get worse.

>As a final note, because dot two (collective) standardizes user-level
>commands, it really can provide practical portability across operating
>systems.  Shell scripts written on a dot-two-conforming UNIX system
>should run just fine on an MS-DOS system under the MKS toolkit.

I hope that is not literally true.  1003 decided quite early that it
would not bend over backward to accommodate layered implementations.
For MS-DOS to be supported even at the 1003.2 level would seem to
require that the standard not permit shared file descriptors,
concurrent process scheduling, etc. in portable scripts.  That would
rule out exploitation of some of UNIX's strongest features!

>On the surface, it seems logical and appealing that documents like
>1003.1 be re-written as a language-independent standard, with a
>separate C-language binding, analogous to those of dot five and dot
>nine.  But is it really?

I don't think it is.  UNIX and C were developed together, and C was
certainly intended to be THE systems implementation language for UNIX.

>First, it fosters the illusion that POSIX is divorced from, and
>unconstrained by its primary implementation language.  Should the
>prohibition against nul characters in filenames be a base-standard
>restriction or a C-binding restriction?

The prohibition is required due to kernel implementation constraints
(due to UNIX being implemented in C and relying on C conventions for
such things as handling pathname strings).  Thus the prohibition is
required no matter what the application implementation language.

>It would be nice if this push for language-independent POSIX would go
>away quietly, but it won't.

As I understand it, it is mainly ISO that is forcing this, probably
originally due to Pascal folks feeling left out of the action.
Because many large U.S. vendors have a significant part of their
market in Europe, where conformance with ISO standards is an
important consideration, there is a lot of pressure to make the
U.S.-developed standards meet ISO requirements, to avoid having to
provide multiple versions of products.  I think this is unfortunate
but don't have any solution to offer.

>X3J11
>A single individual, Russell Hansberry, is blocking the official
>approval of the ANSI standard for C on procedural grounds.  At some
>point, someone failed to comply with the letter of IEEE rules for
>ballot resolution.  and Hansberry is using the irregularity to delay
>adoption of the standard.

This is misstated.  IEEE has nothing to do with X3J11 (other than
through the 1003.1/X3J11 acting liaison, at the moment yours truly).

Mr. Hansberry did appeal to X3 on both technical and procedural
grounds.  X3 reaffirmed the technical content of the proposed
standard and the procedural appeal was eventually voluntarily
withdrawn.  The ANSI Board of Standards Review recently approved
the standard prepared by X3J11.

The delay in ratification consisted of two parts:  First, a delay
caused by having to address an additional public-review letter
(Mr. Hansberry's) that had somehow been mislaid by X3; fortunately
the points in the letter that X3J11 agreed with had already been
addressed during previous public review resolution.  (Note that
X3J11 and X3 do NOT follow anything like the IEEE 1003.n ballot
resolution/consensus process.  I much prefer X3J11's approach.)
Thus through expeditious work by the editor (me again) and reviewers
of the formal X3J11 document responding to the issues raised by the
late letter, this part of the delay was held to merely a few weeks.
The second part of the delay was caused by the appeal process that
Mr. Hansberry initiated (quite within his rights, although nobody I
know of in X3J11 or X3 thought his appeal to be justified).  The
net effect was to delay ratification of the ANSI standard by
several months.

>This has had an odd effect in the 1003 committees.  No one wants to
>see something like this inflicted on his or her group, so folks are
>being particularly careful to dot all i's and cross all t's.  I say
>odd because it doesn't look as though Hansberry's objections will have
>any effect whatsoever on either the standard, or its effectiveness.
>Whether ANSI puts its stamp on it or not, every C compiler vendor is
>implementing the standard, and every book (even K&R) is writing to it.
>X3J11 has replaced one de-facto standard with another, even stronger
>one.

That's because all the technical work had been completed and the
appeal introduced merely procedural delays.  Thus there was a clear
specification that was practically certain to become ratified as the
official standard eventually, so there was little risk and considerable
gain in proceeding to implement conformance to it.

You should note that no amount of dotting i's and crossing t's
would have prevented the Hansberry appeal.  I'm not convinced that
even handling his letter during the second public review batch would
have forestalled the appeal, which so far as I can tell was motivated
primarily by his disappointment that X3J11 had not attempted to specify
facilities aimed specifically at real-time embedded system applications.
(Note that this sort of thing was not part of X3J11's charter.)

>1201
>someone smart will show us a better way to do graphics, and X will
>become a backwater.

Someone smart has already shown us better ways to do graphics.
(If you've been reading ACM TOG and the USENIX TJ, you should have
already seen some of these.)

There is no doubt a need for X standardization, but it makes no
sense to bundle it in with POSIX.

>If 1201.1 ignores X and NIST, can it do anything?  Certainly.  The
>real problem with the occasionally asked question, ``are standards
>bad?'' is that it omits the first word: ``When.'' Asked properly, the
>answer is, ``When they're at the wrong level.'' API's XVT is example
>of a toolkit that sits above libraries like Motif or the Mac toolbox,
>and provides programmers with much of the standard functionality
>necessary to write useful applications on a wide variety of window
>systems.  Even if XVT isn't the answer, it provides proof by example
>that we can have a window-system-independent, application programming
>interface for windowing systems.  1201.1 could provide a useful
>standard at that level.  Will it?  Watch and see.

This makes a good point.  Standards can be bad not only because of
being drawn up for the wrong conceptual level, but also when they
do not readily accommodate a variety of environments.  1003.1 was
fairly careful to at least consider pipes-as-streams, network file
systems, ACLs, and other potential enhancements to the POSIX-
specified environment as just that, enhancements to an environment
that was deliberately selected to support portability of applications.
If a standard includes a too-specific methodology, it actually will
adversely constrain application portability.

By the way, I could use more information about API's XVT.  How can
I obtain it?

Volume-Number: Volume 18, Number 13

jsq@usenix.org (John S. Quarterman) (01/09/90)

From: John S. Quarterman <jsq@usenix.org>

In article <500@longway.TIC.COM> Doug Gwyn <gwyn@smoke.brl.mil> writes:

>>A key to success will be keeping enough of the original dot one
>>participants available and active to insure consistency.

>Good luck with this.  Personally, I couldn't afford to pay the dues
>and limited my membership to 1003.2 once Std 1003.1-1988 was published.

I will add the possibility of subsidising some IEEE group memberships
(mailing list subscriptions) to the annual standards proposal I'm writing
for the USENIX board meeting at the end of the month.

>>X3J11
>>A single individual, Russell Hansberry, is blocking the official
>>approval of the ANSI standard for C on procedural grounds.  At some
>>point, someone failed to comply with the letter of IEEE rules for
>>ballot resolution.  and Hansberry is using the irregularity to delay
>>adoption of the standard.

>This is misstated.  IEEE has nothing to do with X3J11 (other than
>through the 1003.1/X3J11 acting liaison, at the moment yours truly).

You're right.  As publisher, I should have caught that.
Thanks for the clarifications and updates on the situation.

>There is no doubt a need for X standardization, but it makes no
>sense to bundle it in with POSIX.

Technically, it isn't POSIX.  That's why it's IEEE 1201, not IEEE 1003.
However, since 1201 seems to always meet concurrently with 1003, I'm
not sure what practical difference there is.

>Someone smart has already shown us better ways to do graphics.
>(If you've been reading ACM TOG and the USENIX TJ, you should have
>already seen some of these.)

Could you be talked into posting a summary article on this?

>By the way, I could use more information about API's XVT.  How can
>I obtain it?

If no one else posts information on this, I will dig it up and do so.
There have been papers on XVT in the Brussels EUUG Conference Proceedings
(April 1989) and the Baltimore USENIX Conference Proceedings (June 1989).
Naturally, those seem to be the two volumes missing from my shelf.


As moderator of the newsgroup, I would ordinarily have waited a few
days before replying on the above subjects, so that other people would
have a chance first.  However, I'm leaving tomorrow morning for New Orleans,
so I thought it best to respond now.  Jeff Haemer is already there, so you
may not hear more from him this week.

I hope to hear more discussion on Jeff's and Doug's points.

Volume-Number: Volume 18, Number 14

kingdon@ai.mit.edu (Jim Kingdon) (01/11/90)

    Cc has been renamed c89, and lint may become lint89.

Lint isn't in 1003.2 draft 9 (at least it's not in the index).
The name 'c89' is harmless, provided all implementors are sensible
enough to provide a 'cc' as well (in many cases these will be
different--c89 has to be ANSI but cc might contain ANSI-prohibited
extensions).  


Volume-Number: Volume 18, Number 17

mbj@spice.cs.cmu.edu (Michael Jones) (05/02/90)

From: mbj@spice.cs.cmu.edu (Michael Jones)

> >       The threads subgroup (1003.4A) has attempted to kill the .4 ballot by
> >       a block vote for rejection.  One correspondent says they are doing
> >       this because .4 is no good without threads.
> 
> I'd like to hear an explanation of this assertion.  Certainly, for
> years we've been developing real-time applications without support
> for threads.  They seem like separable issues to me.

Since this came up again I suppose it warrants a reply.  I'd like to state as
an active member of .4a (which makes me an active member of .4 since the two
are one and the same working group) that I perceive no attempt to kill .4.
Several detailed ballot objections were submitted of which mine was certainly
one.  My objections were motivated by areas of the .4 proposal which I felt
could be significantly improved and responsive suggestions were made.  I know
of others who felt similarly and balloted in kind.  But in no way did I
perceive any linkage between attempting to improve .4 and any alleged
inadequacy of .4 without threads.

Realtime support is good.  Threads are good.  They can be used together.
They can be used separately.  In my view those members of the working group
with realtime expertise have improved .4 and those with threads expertise
have improved .4a.  I perceive no conflict.

				-- Mike

Volume-Number: Volume 19, Number 90

peter@ficc.uu.net (Peter da Silva) (05/02/90)

From: peter@ficc.uu.net (Peter da Silva)

I've finally got a copy of P1003.4, and I find it to be quite nice. The
lack of threads is no big deal... threads should certainly be standardised,
but any threads design that can't be implemented on top of P1003.4 is
probably going to cause big problems for existing systems anyway.

One thing to consider is that threads and real-time are not equivalent
concepts. Threads are a nice technique for implementing real-time systems,
and most real-time systems make an implementation of threads pretty easy,
but there are non-real-time systems that implement lightweight processes to
improve throughput rather than to reduce response time.
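
As a rough illustration of that separability (using the semaphore
interface roughly as it was later approved in POSIX.1b; the names in the
1990 draft may have differed), two ordinary single-threaded processes can
synchronize through one of the .4-style facilities, a named counting
semaphore, with no threads anywhere:

    /* rt_without_threads.c -- illustrative sketch only.  Two
     * single-threaded processes share a named counting semaphore.
     * Interface names follow the eventual POSIX.1b spelling
     * (sem_open, sem_wait, sem_post); on some systems link with -lrt.
     */
    #include <fcntl.h>      /* O_CREAT */
    #include <semaphore.h>  /* sem_open, sem_wait, sem_post */
    #include <stdio.h>
    #include <sys/stat.h>   /* mode bits for sem_open */
    #include <unistd.h>     /* fork */

    int main(void)
    {
        /* Create a named semaphore with an initial count of zero. */
        sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 0);
        if (sem == SEM_FAILED) {
            perror("sem_open");
            return 1;
        }

        if (fork() == 0) {
            /* Child: do its work, then post the semaphore. */
            printf("child: done, posting\n");
            sem_post(sem);
            return 0;
        }

        /* Parent: block until the child posts -- no threads required. */
        sem_wait(sem);
        printf("parent: child finished\n");

        sem_close(sem);
        sem_unlink("/demo_sem");
        return 0;
    }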

Keeping P1003.4 from prohibiting certain threaded implementations is one
thing, but it shouldn't require threads in any real-time system. And it
shouldn't require that you go to a real-time system to conform to the
threads standard.

Threads probably deserves a P1003 number of its own.

As for Berkeley's sore feelings because P1003.4 doesn't look like BSD, that's
just silly. It'd be like USG being upset because P1003.4 doesn't implement
the System-V IPC kludges. P1003.4 looks quite familiar to me, from working
with other real-time systems... including real-time-like UNIX. And it should
be implementable (as far as the functionality you need for real-time can be)
on top of sockets, without penalising real real-time folks by sticking them
with a socket interface.
-- 
 _--_|\  `-_-' Peter da Silva. +1 713 274 5180.      <peter@ficc.uu.net>
/      \  'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
\_.--._/       Disclaimer: commercial solicitation by email to this address
      v                    is acceptable.


Volume-Number: Volume 19, Number 99

decot@hpda.uucp (Dave Decot) (05/03/90)

From: decot@hpda.uucp (Dave Decot)

Jeff Haemer writes:

> >     Parenthetically, I'll admit to being mystified by the dim view some
> >     folks take of the UPE.  I actually put programmer portability above
> >     program portability, since, when I go looking for new jobs I can't
> >     take our software with me, but do want to be sure that I can still use
> >     vi.

Doug Gwyn responds:

> It seems most unlikely to me that you have the option of specifying
> IEEE 1003.2 conformance when you interview with a prospective employer.
> I believe that the main point of these standards is to attain improved
> portability for applications.
> 
> Besides, why should I have to support both "vi" and "emacs" on my systems
> when we're all using "sam" instead?  It gains me NOTHING (because imported
> software is not going to require these interactive facilities) and costs
> me a bunch.

I suggest that you learn the scope and purpose of the UPE (now known
as the User Portability Utilities Option, or POSIX.2a).  It has a
different focus from the base POSIX.2 specification, and is based
upon a rejection of what you assert above.

One of the primary motivations for POSIX.2a is the desire to have a
standard set of utilities that a user can learn once, and thereafter
be a "portable user" of those utilities.

Prospective employers can already ask employees whether they "know MSWord,
Lotus, and MacPaint", because those are industry-standard utilities.
The same treatment should be available for traditional UNIX tools, but
since there are different vendors of these, a common specification
is necessary.

Having attended the POSIX.2 committee meetings for quite a long time,
I concur with Hal Jespersen's representation of the SCCS/RCS issues
and the contents of POSIX.2b and .2c.

Dave Decot, HP
DISCLAIMER: This message represents only my views.

Volume-Number: Volume 19, Number 106

std-unix@longway.TIC.COM (Moderator, John S. Quarterman) (05/07/90)

From: Doug Gwyn <uunet!smoke.brl.mil!gwyn>

In article <671@longway.TIC.COM>, decot@hpda.uucp (Dave Decot) writes:
>One of the primary motivations for POSIX.2a is the desire to have a
>standard set of utilities that a user can learn once, and thereafter
>be a "portable user" of those utilities.

Utilities designed for END USERS, as opposed to those designed for
programmers, should be such that they are very easy to learn.

>Prospective employers can already ask employees whether they "know MSWord,
>Lotus, and MacPaint", because those are industry-standard utilities.

Apart from MacPaint, they don't have well-designed user interfaces either.
Most Mac software can be immediately used with NO TRAINING by almost
anyone at all familiar with general characteristics of that environment.
Trying to standardize details of specific applications within an easy-to-use
environment would seem pretty much a waste of time.  Conversely, trying to
standardize details of a hard-to-use interface would also seem to be a waste
of time, since people who would most benefit from that would benefit even
more from having a decent user interface instead!

Volume-Number: Volume 19, Number 109

jsh@usenix.org (06/30/90)

From:  <jsh@usenix.org>


           An Update on UNIX*-Related Standards Activities

                              June, 1990

                 USENIX Standards Watchdog Committee

                   Jeffrey S. Haemer, Report Editor

USENIX Standards Watchdog Committee

Jeffrey S. Haemer <jsh@ico.isc.com> reports on spring-quarter
standards activities

What these reports are about

Reports are done quarterly, for the USENIX Association, by volunteers
from the individual standards committees.  The volunteers are
familiarly known as snitches and the reports as snitch reports.  The
band of snitches and I make up the working committee of the USENIX
Standards Watchdog Committee.  Our job is to let you know about things
going on in the standards arena that might affect your professional
life -- either now or down the road a ways.

We don't yet have active snitches for all the committees and sometimes
have to beat the bushes for new snitches when old ones retire or can't
make a meeting, but the number of groups with active snitches
continues to grow (as, unfortunately, does the number of groups).

We know we currently need snitches in 1003.6 (Security), 1003.11
(Transaction Processing), 1003.13 (Real-time Profile), and nearly all
of the 1200-series POSIX groups.  There are probably X3 groups that
USENIX members would like to know about that we don't even know to
look for watchdogs in.  If you're active in any other
standards-related activity that you think you'd like to report on,
please drop me a line.  Andrew Hume's fine report on X3B11.1 is an
example of the kind of submission I'd love to see.

If you have comments or suggestions, or are interested in snitching
for any group, please contact me (jsh@usenix.org) or John
(jsq@usenix.org).  If some of the reports make you interested enough
or indignant enough to want to go to a POSIX meeting, or you just want
to talk to me in person, join me at the next set, July 16-20, at the
Sheraton Tara, in Danvers, Massachusetts, just outside of Boston.

The USENIX Standards Watchdog Committee also has both a financial
committee -- Ellie Young, Alan G. Nemeth, and Kirk McKusick (chair);
and a policy committee -- the financial committee plus John S.
Quarterman (chair).

__________

  * UNIX is a registered trademark of AT&T in the U.S. and other
    countries.

An official statement from John:

     The basic USENIX policy regarding standards is:
          to attempt to prevent standards from prohibiting innovation.
     To do that, we

        + Collect and publish contextual and technical information
          such as the snitch reports that otherwise would be lost in
          committee minutes or rationale appendices or would not be
          written down at all.

        + Encourage appropriate people to get involved in the
          standards process.

        + Hold forums such as Birds of a Feather (BOF) meetings at
          conferences.  We sponsored one workshop on standards and are
          cosponsoring another in conjunction with IEEE, UniForum, and
          EUUG.  (Co-chairs are Shane P. McCarron <ahby@uiunix.org>
          and Fritz Schulz <fritz@osf.osf.org>.  Contact them for
          details.)

        + Write and present proposals to standards bodies in specific
          areas.

        + Occasionally sponsor White Papers in particularly
          problematical areas, such as IEEE 1003.7 (in 1989).

        + Very occasionally lobby organizations that oversee standards
          bodies regarding new committees, documents, or balloting
          procedures.

        + Starting in mid-1989, USENIX and EUUG (the European UNIX
          systems Users Group) began sponsoring a joint representative
          to the ISO/IEC JTC1 SC22 WG15 (ISO POSIX) standards
          committee.

     There are some things we do not do:

        + Form standards committees.  It's the USENIX Standards
          Watchdog Committee, not the POSIX Watchdog Committee, not
          part of POSIX, and not limited to POSIX.

        + Promote standards.

        + Endorse standards.

     Occasionally we may ask snitches to present proposals or argue
     positions on behalf of USENIX.  They are not required to do so
     and cannot do so unless asked by the USENIX Standards Watchdog
     Policy Committee.

     Snitches mostly report.  We also encourage them to recommend
     actions for USENIX to take.

          John S. Quarterman, USENIX Standards Liaison

Volume-Number: Volume 20, Number 65

jsh@usenix.org (Jeffrey S. Haemer) (10/12/90)

Submitted-by: jsh@usenix.org (Jeffrey S. Haemer)


           An Update on UNIX1-Related Standards Activities

                          October 11, 1990

                 USENIX Standards Watchdog Committee

           Jeffrey S. Haemer, jsh@ico.isc.com, Report Editor

USENIX Standards Watchdog Committee

Jeffrey S. Haemer <jsh@ico.isc.com> reports on summer-quarter
standards activities

What these reports are about

Reports are done quarterly, for the USENIX Association, by volunteers
from the individual standards committees.  The volunteers are
familiarly known as snitches and the reports as snitch reports.  The
band of snitches, John Quarterman, and I make up the working committee
of the USENIX Standards Watchdog Committee.  Our job is to let you
know about things going on in the standards arena that might affect
your professional life -- either now or down the road a ways.

We don't yet have active snitches for all the committees and sometimes
have to beat the bushes for new snitches when old ones retire or can't
make a meeting, but the number of groups with active snitches
continues to grow (as, unfortunately, does the number of groups).

If you're active in any standards-related activity that you think
you'd like to report on, please drop me a line.  We know we currently
need snitches in several 1003 groups, and nearly all of the
1200-series groups.  We currently have snitches in X3J16 (C++) and
X3B11 (WORM file systems), but there are probably X3 groups that
USENIX members would like to know about that we don't even know to
look for watchdogs in.  I also take reports from other standards
activities.  This quarter, you've seen reports from the WG-15 TAG (the
U.S.'s effort in the ISO POSIX arena), from the NIST Shell-and-Tools
FIPS meeting, and from the USENIX Standards BOF.

If you have comments or suggestions, or are interested in snitching
for any group, please contact me (jsh@usenix.org) or John
(jsq@usenix.org).  If some of the reports make you interested enough
or indignant enough to want to go to a POSIX meeting, or you just want
to talk to me in person, join me at the next set, October 15-19 at the
Westin Hotel in Seattle, Washington.

__________

  1. UNIX is a registered trademark of UNIX System Laboratories in
     the United States and other countries.

The USENIX Standards Watchdog Committee also has both a financial
committee -- Ellie Young, Alan G. Nemeth, and Kirk McKusick (chair);
and a policy committee -- the financial committee plus John S.
Quarterman (chair).

An official statement from John:2

__________

  2. All that follows is currently true, but may change in the near
     future because of recent USENIX financial problems.  See John's
     October 2, 1990, comp.std.unix posting on USENIX Standards
     Funding Decisions for details.

     The basic USENIX policy regarding standards is:
          to attempt to prevent standards from prohibiting innovation.
     To do that, we

        o Collect and publish contextual and technical information
          such as the snitch reports that otherwise would be lost in
          committee minutes or rationale appendices or would not be
          written down at all.

        o Encourage appropriate people to get involved in the
          standards process.

        o Hold forums such as Birds of a Feather (BOF) meetings at
          conferences, and standards workshops.

        o Write and present proposals to standards bodies in specific
          areas.

        o Occasionally sponsor other standards-related activities,
          including White Papers in particularly problematical areas,
          such as IEEE 1003.7, and contests, such as the current
          Weirdnix contest.

        o Very occasionally lobby organizations that oversee standards
          bodies regarding new committees, documents, or balloting
          procedures.

        o Sponsor a representative to the ISO/IEC JTC1 SC22 WG15 (ISO
          POSIX) standards committee, jointly with EUUG (the European
          UNIX systems Users Group).

     There are some things we do not do:

        o Form standards committees.  It's the USENIX Standards
          Watchdog Committee, not the POSIX Watchdog Committee, not
          part of POSIX, and not limited to POSIX.

        o Promote standards.

        o Endorse standards.

     Occasionally we may ask snitches to present proposals or argue
     positions on behalf of USENIX.  They are not required to do so
     and cannot do so unless asked by the USENIX Standards Watchdog
     Policy Committee.

     Snitches mostly report.  We also encourage them to recommend
     actions for USENIX to take.

          John S. Quarterman, USENIX Standards Liaison



Volume-Number: Volume 21, Number 199