jsh@usenix.org (06/30/90)
From: <jsh@usenix.org>
An Update on UNIX*-Related Standards Activities
June, 1990
USENIX Standards Watchdog Committee
Jeffrey S. Haemer, jsh@ico.isc.com, Report Editor
Recent Standards Activities
This editorial is an overview of some of the spring-quarter standards
activities covered by the USENIX Standards Watchdog Committee. A
companion article provides a general overview of the committee itself.
In this article, I've emphasized non-technical issues, which are
unlikely to appear in official minutes and mailings of the standards
committees. Previously published articles give more detailed, more
technical views on most of these groups' activities. If my comments
move you to read one of those earlier reports that you wouldn't have
read otherwise, I've served my purpose. Of course, on reading that
report you may discover the watchdog's opinion differs completely from
mine.
SEC: Standard/Sponsor Executive Committee
The biggest hullabaloo in the POSIX world this quarter came out of the
SEC, the group that approves creation of new committees. At the April
meeting, in a move to slow the uncontrolled proliferation of POSIX
standards, the institutional representatives (IRs) (one each from
Usenix, UniForum, X/Open, OSF, and UI) recommended two changes in the
Project Authorization Request (PAR) approval process: (1) firm
criteria for PAR approval and group persistence and (2) a PAR-approval
group that had no working-group chairs or co-chairs. Dale Harris, of
IBM Austin, presented the proposal and immediately took a lot of heat
from the attendees, most of whom are working-group chairs and co-
chairs.  (Dale isn't an IR, but shared the concerns that motivated the
recommendations and asked to make the presentation.)
The chair, Jim Isaak, created an ad-hoc committee to talk over the
proposal in a less emotional atmosphere. Consensus when the committee
met was that the problem of proliferating PARs was real, and the only
question was how to fix it. The group put together a formal set of
criteria for PAR approval (which John Quarterman has posted to
comp.std.unix), which seems to have satisfied everyone on the SEC, and
passed without issue. The criteria seem to have teeth: at least one
of the Project Authorization Requests presented later (1201.3, UIMS)
__________
* UNIX is a Registered Trademark of UNIX System Laboratories in the
United States and other countries.
flunked the criteria and was rejected. Two others (1201.1 and 1201.4
toolkits and Xlib) were deferred. I suspect (though doubt that any
would admit it) that the proposals would have been submitted and
passed in the absence of the criteria. In another related up-note,
Tim Baker and Jim Isaak drafted a letter to one group (1224, X.400
API), warning them that they must either prove they're working or
dissolve.
The second of the two suggestions, the creation of a PAR-approval
subcommittee, sank quietly. The issue will stay submerged so long as
it looks like the SEC is actually using the approved criteria to fix
the problem. [ Actually, this may not be true. Watch for developments
at the next meeting, in Danvers, MA in mid-July. -jsq]
Shane McCarron's column in the July Unix Review covers this area in
more detail.
1003.0: POSIX Guide
Those of you who have read my last two columns will know that I've
taken the position that dot zero is valuable, even if it doesn't get a
lot of measurable work done. This time, I have to say it looks like
it's also making measurable progress, and may go to mock ballot by its
target of the fourth quarter of this year.  To me, the most interesting
dot-zero-related items this quarter are the growing prominence of
profiles and the mention of dot zero's work in the PAR-approval
criteria passed by the SEC.
Al Hankinson, the chair, tells me that he thinks dot zero's biggest
contribution has been popularizing profiles -- basically,
application-area-specific lists of pointers to other standards. This
organizing principle has been adopted not only by the SEC (several of
the POSIX groups are writing profiles), but by NIST (Al's from NIST)
and ISO. I suspect a lot of other important organizations will fall
in line here.
Nestled among the other criteria for PAR approval is a requirement
that PAR proposers write a sample description of their group for the
POSIX guide. Someone questioned why proposers should have to do dot
zero's job for them. The explanation comes in two pieces. First, dot
zero doesn't have the resources to be an expert on everything; it has
its hands full just trying to create an overall architecture. Second,
the proposers aren't supplying what will ultimately go into the POSIX
guide, they're supplying a sample. The act of drafting that sample
will force each proposer to think hard about where the new group would
fit in the grand scheme, right from the start. This should help
ensure that the guide's architecture really does reflect the rest of
the POSIX effort, and will increase the interest of the other groups
in the details of the guide.
1003.1: System services interface
Dot one, the only group that has completed a standard, is in the
throes of completing a second. Not only has the IEEE updated the
existing standard -- the new version will be IEEE 1003.1-1990 -- but
ISO appears on the verge of approving the new version as IS 9945-1.  The
major sticking points currently seem limited to things like format and
layout -- important in the bureaucratic world of international
standards, but inconsequential to the average user. Speaking of
layout, one wonders whether the new edition and ISO versions will
retain the yellow-green cover that has given the current document its
common name -- the ugly green book. (I've thought about soaking mine
in Aqua Velva so it can smell like Green Chartreuse, too.)
The interesting issues in the group are raised by the dot-one-b work,
which adds new functionality. (Read Paul Rabin's snitch report for
the gory details.) The thorniest problem is the messaging work.
Messaging, here, means a mechanism for access to external text and is
unrelated to msgget(), msgop(), msgctl(), or any other message-passing
schemes. The problem being addressed is how to move all printable
strings out of our programs and into external ``message'' files so
that we can change program output from, say, English to German by
changing an environment variable.  Other dot-one-b topics, like
symbolic links, are interesting, but less pervasive. This one will
change the way you write any commercial product that outputs text --
anything that has printf() statements.
The group is in a quandary. X/Open has a scheme that has gotten a
little use. We're not talking three or four years of shake-out, here,
but enough use to lay a claim to the ``existing practice'' label. On
the other hand, it isn't a very pleasant scheme, and you'd have no
problem coming up with candidate alternatives. The UniForum
Internationalization Technical Committee presented one at the April
meeting. It's rumored that X/Open itself may replace its current
scheme with another. So, what to do? Changing to a new scheme
ignores existing internationalized applications and codifies an
untried approach. Blessing the current X/Open scheme freezes
evolution at this early stage and kills any motivation to develop an
easy-to-use alternative. Not providing any standard makes
internationalized applications (in a couple of years this will mean
any non-throw-away program) non-portable, and requires that we
continue to make heavy source-code modifications on every
port -- just what POSIX is supposed to help us get around.
To help you think about the problem, here's the way you'll have to
write the "hello, world" koan using the X/OPEN interfaces:
#include <stdio.h>
#include <nl_types.h>
#include <locale.h>

main()
{
    nl_catd catd;

    (void)setlocale(LC_ALL, "");
    catd = catopen("hello", 0);    /* error checking omitted for brevity */
    printf(catgets(catd, 1, 1, "hello, world\n"));
}
and using the alternative, proposed UniForum interfaces:
#include <stdio.h>
#include <locale.h>

main()
{
    (void)setlocale(LC_ALL, "");
    (void)textdomain("hello");
    printf(gettext("hello, world\n"));
}
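(For reference, the message catalogue behind the X/Open version would
be compiled -- with an X/Open utility such as gencat -- from a source
file along roughly these lines; the set and message numbers match the
catgets() call above, and the file name is only an illustration:

    $ hello.msg -- message source for the example above
    $set 1
    1 hello, world\n

The UniForum scheme keeps an analogous per-locale database, but keys it
on the string itself rather than on set and message numbers.)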
I suppose if I had my druthers, I'd like to see a standard interface
that goes even farther than the UniForum proposal: one that adds a
default message catalogue/group (perhaps based on the name of the
program) and a standard, printf-family messaging function to hide the
explicit gettext() call, so the program could look like this:
#include <stdio.h>
#include <locale.h>

#define printf printmsg

main()
{
    (void)setlocale(LC_ALL, "");    /* inescapable, required by ANSI C */
    printf("hello, world\n");
}
but that would still be untested innovation.
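Just to make that suggestion concrete, here is one way such a
printmsg() might be layered over the proposed UniForum gettext() call.
This is purely my own sketch -- no committee has seen anything like
it -- and it shows only the wrapper, not the default-catalogue half of
the suggestion; gettext() is assumed from the UniForum proposal:

#include <stdio.h>
#include <stdarg.h>

extern char *gettext(const char *);    /* assumed from the UniForum proposal */

/*
 * Look the format string up in the current locale's catalogue, then
 * hand the translation and the caller's arguments to vprintf().
 */
int
printmsg(const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vprintf(gettext(fmt), ap);
    va_end(ap);
    return n;
}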
The weather conditions in Colorado have made this a bonus year for
moths. Every morning, our bathroom has about forty moths in it.
Stuck in our house, wanting desperately to get out, they fly toward
the only light that they can see and beat themselves to death on the
bathroom window. I don't know what to tell them, either.
1003.2: Shell and utilities
Someone surprised me at the April meeting by asserting that 1003.2
might be an important next target for the FORTRAN binding group.
(``What does that mean?'' I asked stupidly. ``A standard for a
FORTRAN-shell?'')  Perhaps you, like me, just think of dot two as
language-independent utilities. Yes and no.
First, 1003.2 has over a dozen function calls (e.g., getopt(); a
minimal sketch of its use appears below).  I believe that most of
these should be moved into 1003.1.  Functions like system() and
popen(), which assume a shell, might be exceptions, but having
sections of standards documents point at things outside their scope is
not without precedent. Section 8 of P1003.1-1988 is a section of C-
language extensions, and P1003.5 will depend on the Ada standard. Why
shouldn't an optional section of dot one depend on dot two? Perhaps
ISO, already committed to re-grouping and re-numbering the standards,
will fix this. Perhaps not. In the meantime, there are functions in
dot two that need FORTRAN and Ada bindings.
Second, the current dot two standard specifies a C compiler. Dot nine
has already helped dot two name the FORTRAN compiler, and may want to
help dot two add a FORTRAN equivalent of lint (which I've heard called
``flint''). Dot five may want to provide analogous sorts of help
(though Ada compilers probably already subsume much of lint's
functionality).
Third, more subtle issues arise in providing a portable utilities
environment for programmers in other languages. Numerical libraries,
like IMSL, are often kept as single, large source files with hundreds,
or even thousands, of routines in a single .f file that compiles into
a single .o file. Traditional FORTRAN environments provide tools that
allow updating or extraction of single subroutines or functions from
such objects, analogous to the way ar can add or replace single
objects in libraries. Dot nine may want to provide such a facility in
a FORTRAN binding to dot two.
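Here, as promised, is a minimal sketch of getopt() use -- the flavor
of C-level interface that now sits in dot two and that FORTRAN and Ada
programmers would presumably want bound to their own languages.  The
option letters are invented for illustration, and header placement has
varied historically from system to system:

#include <stdio.h>
#include <unistd.h>    /* getopt(), optarg, optind on many systems */

int
main(int argc, char *argv[])
{
    int c, aflag = 0;
    char *ofile = NULL;

    while ((c = getopt(argc, argv, "ao:")) != -1) {
        switch (c) {
        case 'a':                /* a simple flag */
            aflag = 1;
            break;
        case 'o':                /* an option that takes an argument */
            ofile = optarg;
            break;
        default:
            fprintf(stderr, "usage: example [-a] [-o file] [operand...]\n");
            return 2;
        }
    }
    /* operands, if any, begin at argv[optind] */
    return 0;
}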
Anyway, back to the working group. They're preparing to go to ballot
on the UPE (1003.2a, User Portability Extensions). The mock ballot
had pretty minimal return, with only ten balloters providing
approximately 500 objections. Ten isn't very many, but mock ballot
for dot two classic only had twenty-three. It seems that people won't
vote until they're forced to.
The collection of utilities in 1003.2a is fairly reasonable, with only
a few diversions from historic practice. A big exception is ps(1),
where historic practice is so heterogeneous that a complete redesign
is possible. Unfortunately, no strong logical thread links the
1003.2a commands together, so read the ballot with an eye toward
commands that should be added or discarded.
A few utilities have already disappeared since the last draft. Pshar,
an implementation of shar with a lot of bells and whistles, is gone.
Compress/uncompress poses an interesting problem. Though the utility
is based on clear-cut existing practice, the existing implementation
uses an algorithm that is patented.  Unless the author chooses to
give the algorithm away (as Ritchie dedicated his set-uid patent to
public use), the committee is faced with a hard choice:
- They can specify only the user interface. But the purpose of
these utilities is to ease the cost of file interchange. What
good are they without a standard data-interchange format?
- They can invent a new algorithm. Does it make sense to use
something that isn't field-tested or consistent with the versions
already out there? (One assumes that the existing version has
real advantages; otherwise, why would so many people use a
patented version?)
Expect both the first real ballot of 1003.2a and recirculation of
1003.2 around July. Note that the recirculation will only let you
object to items changed since the last draft, for all the usual bad
reasons.
1003.3: Test methods
The first part of dot three's work is coming to real closure. The
last ballot failed, but my guess is that one will pass soon, perhaps
as soon as the end of the year, and we will have a standard for
testing conformance to IEEE 1003.1-1988.
That isn't to say that all is rosy in dot-one testing. NIST's POSIX
Conformance Test Suite (PCTS) still has plenty of problems:
misinterpretations of dot one, simple timing-test problems that cause
tests to run well on 3B2s but produce bad results on a 30-MIPS
machine, and even real bugs (attempts to read from a tty without first
opening it). POSIX dot one is far more complex than anything for
which standard test suites have been developed to date. The PCTS,
with around 2600 tests and 150,000 lines of code, just reflects that
complexity.  An update that fixes all known problems will be sent to
the National Technical Information Service (NTIS -- also part of the
Department of Commerce, but not to be confused with NIST) around the
end of September, but with a suite this large, others are likely to
surface later.
By the way, NIST's dot one suite is a driver based on the System V
Verification Suite (SVVS), plus individual tests developed at NIST.
Work has begun on a suite of tests for 1003.2, based, for convenience,
on a suite done originally for IBM by Mindcraft. It isn't clear how
quickly this work will go. (For example, the suite can't gel until
dot two does.) For the dot one work, NIST made good use of Research
Associates -- people whose services were donated by their corporations
during the test suite development. Corporations gain an opportunity
to collaborate with NIST and inside knowledge of the test suite. I
suspect Roger Martin may now be seeking Research Associates for dot
two test suite development. If you're interested in doing this kind
of work, want to spend some time working in the Washington, D.C. area,
and think your company would sponsor you, his email address is
rmartin@swe.ncsl.nist.gov.
By the way, there are a variety of organizational and numbering
changes happening in dot three. See Doris Lebovits's snitch report
for details.
The Steering Committee on Conformance Testing (SCCT) is the group to
watch. Though they've evolved out of the dot three effort, they
operate at the TCOS level, and are about to change the way POSIX
standards look. In response to the ever-increasing burden placed on
the testing committee, the SCCT is going to recommend that groups
producing new standards include in those standards a list of test
assertions to be used in testing them.
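To give a feel for what such a list might contain -- this is my own
illustration, not language from any draft -- a test assertion for a
dot-two utility might read something like:

    When the true utility is invoked with no operands, it exits with
    a status of zero and writes nothing to standard output.

The point is that each assertion be individually verifiable, so that a
test suite can be built by writing one or more tests per assertion.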
Groups that are almost done, like 1003.2, will be grandfathered in.
But what should be done with a group like dot four -- not far enough
along that it has something likely to pass soon, but far enough to
make the addition of major components to its ballot a real problem?
Should this case be treated like language independence? If so,
perhaps dot four will also be first in providing test assertions.
1003.4: Real-time extensions
The base dot-four document has gone to ballot, and the ensuing process
looks like it may be pretty bloody. Fifty-seven percent of the group
voted against the current version. (One member speculated privately
that this meant forty-three percent of the balloting group didn't read
it.) Twenty-two percent of the group (nearly half of those voting
against) subscribed to all or part of a common reference ballot, which
would require that entire chapters of the document be completely
reworked, replaced, or discarded. Subscribers to this common
reference ballot included employees of Unix International and the Open
Software Foundation, of Carnegie-Mellon University and the University
of California at Berkeley, and of Sun Microsystems and Hewlett-
Packard. (USENIX did not ballot similarly, but only because of lack
of time.) Some of these organizations have never before agreed on the
day of the week, let alone the semantics of system calls. But then,
isn't bringing the industry together one goal of POSIX?
Still, the document has not been returned to the working group by the
technical editors, so we can assume they feel hopeful about resolving
all the objections. Some of this hope may come from the miracle of
formality. I've heard that over half of the common reference ballot
could be declared non-responsive, which means that there's no
obligation to address over half the concerns.
The threads work appears to enjoy a more positive consensus. At least
two interesting alternatives to the current proposal surfaced at the
April meeting, but following a lot of discussion, the existing
proposal stood largely unchanged.  I predict that the threads work,
which will go to ballot after the base dot-four document, will be
approved before it.  John Gertwagen, dot four snitch and chair of
UniForum's real-time technical committee, has bet me a beer that I'm
wrong.
1003.5: Ada bindings and 1003.9: FORTRAN-77 bindings
These groups are coming to the same place at the same time. Both are
going to ballot and seem likely to pass quickly. In each case, the
major focus is shifting from technical issues to the standards process
and its rules: forming balloting groups, relations with ISO, future
directions, and so on.
Here's your chance to do a good deed without much work. Stop reading,
call someone you know who would be interested in these standards, and
give them the name of someone on the committee who can put them into
the balloting group. (If nothing else, point them at our snitches for
this quarter: Jayne Baker cgb@d74sun.mitre.org, for dot five, and
Michael Hannah mjhanna@sandia.gov, for dot nine.) They'll get both a
chance to see the standard that's about to land on top of their work
and a chance to object to anything that's slipped into the standard
that doesn't make sense. The more the merrier on this one, and they
don't have to go to any committee meetings. I've already called a
couple of friends of mine at FORTRAN-oriented companies; both were
pleased to hear about 1003.9, and eager to read and comment on the
proposed standard.
Next up for both groups, after these standards pass, is negotiating
the IEEE standard through the shoals of ISO, both getting and staying
in sync with the various versions and updates of the base standard
(1003.1a, 1003.1b, and 9945-1), and language bindings to other
standards, like 1003.2 and 1003.4. (See my earlier discussion of dot
two.) Notice that they also have the burden of tracking their own
language standards. At least in the case of 1003.9, this probably
means eventually having to think about a binding to X3J3 (Fortran 90).
1003.6: Security
This group has filled the long-vacant post of technical editor and so
is finally back in the standards business.  In any organization
whose ultimate product is to be a document, the technical editor is a
key person. [We pause here to allow readers to make some obligatory
cheap shot about editors.] This is certainly the case in the POSIX
groups, where the technical editors sometimes actually write large
fractions of the final document, albeit under the direction of the
working group.
I'm about to post the dot six snitch report, and don't want to give
any of it away, but will note that it's strongly opinionated and
challenges readers to find any non-DoD use for Mandatory Access
Control, one of the half-dozen areas that they're standardizing.
1003.7: System administration
This group has to solve two problems at different levels at the same
time. On the one hand, it's creating an object-oriented definition of
system administration. This high-level approach encapsulates the
detailed implementation of objects interesting to the system
administrator (user, file system, etc.), so that everyone can see them
in the same way in a heterogeneous environment.  On the other hand,
the protocol for sending messages to these objects must be specified
in detail. If it isn't, manufacturers won't be able to create
interoperable systems.
The group as a whole continues to get complaints about its doing
research-by-committee. It's not even pretending to standardize
existing practice. I have mixed feelings about this, but am
unreservedly nervous that some of the solutions being contemplated
aren't even UNIX-like. For example, the group has tentatively
proposed the unusual syntax ``object action''.  Command names will be
names of objects, and the things to be done to them will be arguments.
This bothers me (and others) for two reasons. First, this confuses
syntax with semantics. You can have the message name first and still
be object-oriented; look at C++. Second, it reverses the traditional,
UNIX verb-noun arrangement: ``mount filesystem'' becomes ``filesystem mount''.
This flies in the face of the few existing practices everyone agrees
on. I worry that these problems, and the resulting inconsistencies
between system administration commands and other utilities, will
confuse users. I have a recurring nightmare of a long line of new
employees outside my door, all come to complain that I've forgotten to
mark one of my device objects, /dev/null, executable.
With no existing practice to provide a reality-check, the group faces
an uphill struggle. If you're an object-oriented maven with a yen to
do something useful, take a look at what this group is doing, then
implement some of it and see if it makes sense. Look at it this way:
by the time the standard becomes reality, you'll have a product, ready
to ship.
1003.10: Supercomputing
This group is working on things many of us old-timers thought we
had seen the last of: batch processing and checkpointing. The
supercomputing community, condemned forever to live on the edge of
what computers can accomplish, is forced into the same approaches we
used back when computer cycles were harder to come by than programmer
cycles, and machines were less reliable than software.
Supercomputers run programs that can't be run on less powerful
computers because of their massive resource requirements
(CPU, memory, I/O).  They need batch processing and checkpointing because
many of these programs are so resource-intensive that they run for a long
time even on supercomputers.  Nevertheless, the supercomputing community is
not the only group that would benefit from standardization in these
areas. (See, for example, my comments on dot fourteen.) Even people
who have (or wish to have) long-running jobs on workstations share
some of the same needs for batch processing and checkpointing.
Karen Sheaffer, the chair of dot ten, had no trouble quickly recasting
the group's proposal for a batch PAR into a proposal that passed the
SEC's PAR-approval criteria. The group is modeling a batch proposal
after existing practice, and things seem to be going smoothly.
Checkpointing, on the other hand, isn't faring as well. People who
program supercomputers need to have a way to snapshot jobs in a way
that lets them restart the jobs at that point later. Think, for
example, of a job that needs to run for longer than a machine's mean-
time-to-failure. Or a job that runs for just a little longer than
your grant money lasts. There are existing, proprietary schemes in
the supercomputing world, but none that's portable. The consensus is
that a portable mechanism would be useful and that support for
checkpointing should be added to the dot one standard. The group
brought a proposal to dot one b, but it was rejected for reasons
detailed in Paul Rabin's dot one report. Indeed, the last I heard,
dot-one folks were suggesting that dot ten propose interfaces that
would be called from within the program to be checkpointed. While
this may seem to the dot-one folks like the most practical approach,
it seems to me to be searching under the lamp-post for your keys
because that's where the light's brightest. Users need to be able to
point to a job that's run longer than anticipated and say,
``Checkpoint this, please.'' Requiring source-code modification to
accomplish this is not only unrealistic, it's un-UNIX-like. (A
helpful person looking over my shoulder has just pointed out that the
lawyers have declared ``UNIX'' an adjective, and I should say
something like ``un-UNIX-system-like'' instead. He is, of course,
correct.) Whatever the interface is, it simply must provide a way to
let a user point at another process and say, ``Snapshot it,'' just as
we can stop a running job with job control.
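To make the contrast concrete, here is a purely hypothetical sketch --
mine, not dot ten's or dot one's -- of the two shapes such an
interface could take.  Neither function exists anywhere; the names and
arguments are invented for illustration:

#include <sys/types.h>    /* pid_t */

/*
 * The style dot one has been steering toward: the program arranges
 * its own checkpoints, so every source must be modified.
 */
int checkpoint_self(const char *image_path);                  /* hypothetical */

/*
 * The style argued for above: any process -- a shell, a batch queue,
 * a user's utility -- names a target by process ID, the way kill()
 * names the target of a signal, and the source goes untouched.
 */
int checkpoint_process(pid_t pid, const char *image_path);    /* hypothetical */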
1003.12: Protocol-independent interfaces
This group is still working on two separate interfaces to the network:
Simple Network Interface (SNI) and Detailed Network Interface (DNI).
The January meeting raised the possibility that the group would
coalesce these into a single scheme, but that scheme seems not to have
materialized. DNI will provide a familiar socket- or XTI/TLI-like
interface to networks, while SNI will provide a simpler, stdio-like
interface for programs that don't need the level of control that DNI
will provide. The challenge of SNI is to make something that's simple
but not so crippled that it's useless. The challenge of DNI is to
negotiate the fine line between the two competing, existing practices.
The group has already decided not to use either sockets or XTI, and is
looking at requirements for the replacement. Our snitch, Andy
Nicholson, challenged readers to find a reason not to make DNI
endpoints POSIX file descriptors, but has seen no takers.
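For readers who haven't used either camp's interface, here is a sketch
of the familiar 4.3BSD-sockets style of existing practice that DNI is
expected to resemble -- emphatically not the DNI interface itself,
which doesn't exist yet.  The point to notice is that the endpoint
really is an ordinary file descriptor, which is the heart of Andy
Nicholson's challenge.  Error checking is omitted for brevity:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

/*
 * Open a TCP connection and hand back the endpoint as a file
 * descriptor; from here on, plain read() and write() apply.
 */
int
open_tcp_endpoint(unsigned long addr, unsigned short port)
{
    struct sockaddr_in sin;
    int fd;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(addr);
    sin.sin_port = htons(port);
    (void)connect(fd, (struct sockaddr *)&sin, sizeof sin);
    return fd;
}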
1003.14: Multiprocessing
The multiprocessing group, which had been meeting as sort of an ad-hoc
spin-off of the real-time group, was given PAR approval at the April
meeting as 1003.16 but quickly renamed 1003.14 for administrative
reasons. They're currently going through the standard set of jobs
that new groups have to accomplish, including figuring out what tasks
need to be accomplished, whom to delegate them to, and how to attract
enough working-group members to get everything done. If you want to
get in on the ground floor of the multiprocessing standard, come to
Danvers and volunteer to do something.
One thing that needs to be done is liaison work with other committees,
many of which are attacking problems that bear on multiprocessors as
well.  One example is dot ten's checkpointing work, which I talked
about earlier.  Checkpointing is of direct interest to dot
fourteen and is analogous to several other problems the group would
like to address. (A side-effect of the PAR proliferation problem
mentioned earlier is that inter-group coordination efforts go up as
the square of the number of groups.)
1201: Windows, sort of
Okay, as a review, we went into the Utah meeting with one official
group, 1201, and four unofficial groups preparing PARs:
1. 1201.1: Application toolkit
2. 1201.2: Recommended Practice for Driveability/User Portability
3. 1201.3: User Interface Management Systems
4. 1201.4: Xlib
By the end of the week, one PAR had been shot down (1201.3), one
approved (1201.2), and two remained unsubmitted.
The 1201.4 PAR was deferred because the X Consortium says Xlib is
about to change enough that we don't want to standardize the existing
version. I'll ask, ``If it's still changing this fast, do we want to
even standardize on the next version?'' The 1201.1 PAR was deferred
because the group hasn't agreed on what it wants to do. At the
beginning of the week, the two major camps (OSF/Motif and OPEN LOOK)*
had agreed to try to merge the two interfaces. By mid-week, they
wouldn't even sit at the same table. That they'd struck off in an
alternative, compromise direction by the end of the week speaks
extremely highly of all involved. What the group's looking at now is
a toolkit at the level of XVT**: a layer over all of the current,
competing technologies that would provide portability without
invalidating any existing applications. This seems like just the
right approach. (I have to say this because I suggested it in an
editorial about six months ago.)
The 1201.3 PAR was rejected. Actually, 1201 as a whole voted not to
submit it, but the people working on it felt strongly enough that they
submitted it anyway. The SEC's consensus was that the field wasn't
mature enough to warrant even a recommended practice, but the work
should continue, perhaps as a UniForum Technical Committee. The study
group countered that it was important to set a standard before there
were competing technologies, and that none of the attendees' sponsoring
companies would be willing to foot the bill for their work within
anything but a standards body. The arguments weren't persuasive.
The 1201.2 PAR, in contrast, sailed through. What's interesting about
this work is that it won't be an API standard. A fair fraction of the
committee members are human-factors people, and the person presenting
the PAR convinced the SEC that there is now enough consensus in this
area that a standard is appropriate. I'm willing to believe this, but
I think that stretching the net of the IEEE's Technical Committee on
Operating Systems so wide that it takes in a human-factors standard
for windowing systems is overreaching.
X3
There are other ANSI-accredited standards-sponsoring bodies in the
U.S. besides the IEEE.  The best known in our field is the Computer
and Business Equipment Manufacturers Association (CBEMA), which sponsors
the X3 efforts, recently including X3J11, the ANSI C standards
committee. X3J11's job has wound down; Doug Gwyn tells me that
there's so little happening of general interest that it isn't worth a
report. Still, there's plenty going on in the X3 world. One example
is X3B11, which is developing a standard for file systems on optical
disks. Though this seems specialized, Andrew Hume suggests in his
__________
* OSF/Motif is a Registered Trademark of the Open Software
Foundation.
OPEN LOOK is a Registered Trademark of AT&T.
** XVT is a trademark of XVT Software Inc.
report that this work may eventually evolve into a standards effort
for file systems on any read-write mass storage device. See the dot-
four common reference ballot for the kind of feelings new file-system
standards bring out.
I encourage anyone out there on an X3 committee who thinks the
committee could use more user exposure and input to file a report.
For example, Doug Gwyn suggests that there is enough activity in the
C++ standards world to merit a look. If anyone out there wants to
volunteer a report, I'd love to see it.
Volume-Number: Volume 20, Number 66

barmar@Think.COM (Barry Margolin) (07/01/90)
From: barmar@Think.COM (Barry Margolin)

In article <387@usenix.ORG>, jsh@usenix.org writes:
>The problem being addressed is how to move all printable
>strings out of our programs and into external ``message'' files so
>that we can change program output from, say, English to German by
>changing an environmental variable.

Both examples you supplied were simply ways to look up strings to
output in a database keyed on locale and an internal program string;
they differ only in minor ways.  Does either proposal address any of
the *hard* issues?  For instance, different languages have different
pluralization rules; how do you internationalize a program that
automatically pluralizes when necessary (I hate programs that say
things like "1 files deleted")?  Or what about differing word order;
how would you internationalize

    printf("the %s %s", adjective, noun);

so that it would look right in a language where adjectives follow
nouns?
--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

Volume-Number: Volume 20, Number 77
jason@cnd.hp.com (Jason Zions) (07/02/90)
From: Jason Zions <jason@cnd.hp.com>

Regarding the Snitch Editor's fine report, in the section discussing
1003.12 comes the following sentence:

> Our snitch, Andy Nicholson, challenged readers to find a reason not to
> make DNI endpoints POSIX file descriptors, but has seen no takers.

How about because it constrains implementations to make DNI
kernel-resident?  How about because the semantics of operations
permitted on POSIX file descriptors are a poor match for many
transport providers?  Read()/write() are stream operations; only TCP
is a stream transport provider.  OSI TP0/2/4 maps much more closely to
stdio and fgets()/fputs() in that it is record-oriented.  What does it
mean to seek() on a network endpoint?

A significant branch of the UNIX(tm)-system and POSIX research
community believes "All the world's a file"; the Research Unix V.8 and
Plan 9 folks are among the leaders here.  I feel only slightly
squeamish about accusing them of having only a hammer in their
toolbelt; of *course* everything looks like a nail!

I think it would probably be acceptable to have a DNI function which
accepted a DNI endpoint as argument and attempted to return a real
file descriptor.  This function would check to see that the underlying
transport provider could present reasonable semantics through a POSIX
file descriptor, and would also check that the implementation
supported access to that transport provider through a kernel
interface.

Jason Zions

* UNIX is a trademark of AT&T in the US and other countries.
** Obstreperous iconoclast is a behavioral trademark of Jason Zions in
the US and other countries.

Volume-Number: Volume 20, Number 85
guy@auspex.uucp (Guy Harris) (07/03/90)
From: guy@auspex.uucp (Guy Harris)

>Both examples you supplied were simply ways to look up strings to output in
>a database keyed on locale and an internal program string; they differ only
>in minor ways.  Does either proposal address any of the *hard* issues?  For
>instance, different languages have different pluralization rules; how do
>you internationalize a program that automatically pluralizes when necessary
>(I hate programs that say things like "1 files deleted")?  Or what about
>differing word order; how would you internationalize
>
>    printf("the %s %s", adjective, noun);
>
>so that it would look right in a language where adjectives follow nouns?

The latter can be addressed by a scheme like the X/Open NLS scheme, in
which "printf" arguments can be decorated by specifiers that say which
of the N arguments to "*printf" following the format string should be
used; the "the %s %s" would have to replace "%s %s" with "%2$s %1$s".

HOWEVER:

This does *NOT* do anything about the pluralization rules.  It *also*
does nothing about the fact that the correct translation of "the"
could depend on the noun in question; i.e., is it "la" or "le" in
French?

I think that, for reasons such as these, the only solution to the
problem of trying to find a Magic Bullet so that you can trivially
internationalize the message-printing code of applications by throwing
a simple-minded wrapper around "printf" (whether the #define approach,
or replacing the format string with "getmsg(the format string)", or
whatever) is to have software that is sufficiently knowledgeable about
*all* human languages supported that it knows the gender of all nouns
you'll use in your messages, and knows the right articles for those
genders (for all cases the language has), and knows how to pluralize
arbitrary words.

In fact, I'm not even sure *that's* sufficient; I only know about some
Indo-European languages, and other languages may throw in problems I
haven't even considered.

In other words, I don't think there's a solution to the problem of "oh
dear, how are we going to get all our applications modified to put out
grammatically-correct messages in different languages without having
to examine all the code that generates messages and possibly rewrite
some of that code" other than teaching the system a fair bit about
lots of human languages.  I don't think you can even come up with an
approach that's close enough to a solution to be interesting.

I'm afraid you're just going to have to fall back on things such as:
having "1 frob" and "%d frobs" be *two* separate messages in the
message catalog; having "the chair" and "the table" either be two
separate messages, rather than having "the %s" and foreign-language
versions of same, or having the message be "%s %s" and have the
database tie the noun and the article together (watch out for Russian,
though, they don't *use* articles...); etc..

Yeah, this may mean human intervention, rather than being able to
internationalize your messages by just running a few programs over the
code; nobody ever said that life was fair.  Might as well bite the
bullet....

Volume-Number: Volume 20, Number 86
guy@auspex.uucp (Guy Harris) (07/04/90)
From: guy@auspex.uucp (Guy Harris)

>How about because the semantics of operations permitted on POSIX file
>descriptors are a poor match for many transport providers?  Read()/write()
>are stream operations; only TCP is a stream transport provider.  OSI TP0/2/4
>maps much more closely to stdio and fgets()/fputs() in that it is
>record-oriented.

Standard I/O, and "fgets()"/"fputs()" in particular, are
record-oriented?  News to me; I thought standard I/O offered byte
streams, and "fgets()" read stuff from that stream until it hit a
newline or EOF, and "fputs" put bytes from a string out onto that
stream.

For that matter, raw magtapes are also record oriented, and "read()"
and "write()" work fine on them.  I don't see the problem with TPn; a
single "write()" could either be turned into one packet, or broken up
arbitrarily into N packets if there's a maximum packet size.

If you *want* to have a correspondence between "send it" calls and
records, I see no problem with providing additional calls to do that,
but I also don't see any problem with hiding record boundaries, if
necessary, from applications that *want* to just send byte streams
over TPn.

>What does it mean to seek() on a network endpoint?

What does it mean to "seek()" on a tty?

Volume-Number: Volume 20, Number 96
peter@ficc.ferranti.com (peter da silva) (07/04/90)
From: peter@ficc.ferranti.com (peter da silva)

In article <770@longway.TIC.COM>, jason@cnd.hp.com (Jason Zions) writes:
> Read()/write() are stream operations; only TCP is a stream transport
> provider.  OSI TP0/2/4 maps much more closely to stdio and fgets()/fputs()
> in that it is record-oriented.

The same is true of a UNIX tty device with canonical processing.  So?

> What does it mean to seek() on a network endpoint?

What does it mean to seek() on a pipe?
--
Peter da Silva.  `-_-'  +1 713 274 5180.  <peter@ficc.ferranti.com>

Volume-Number: Volume 20, Number 94
gwyn@smoke.brl.mil (Doug Gwyn) (07/04/90)
From: Doug Gwyn <gwyn@smoke.brl.mil>

>From: guy@auspex.uucp (Guy Harris)
>In other words, I don't think there's a solution to the problem of "oh
>dear, how are we going to get all our applications modified to put out
>grammatically-correct messages in different languages without having to
>examine all the code that generates messages and possibly rewrite some
>of that code" other than teaching the system a fair bit about lots of
>human languages.

Might as well leave out the clause "other than ...".

Perhaps we could persuade those who think there should be a simple
rule for writing just one message text when programming for a variety
of cultures to use Esperanto for their messages.  At least that way
they will be understood just as readily by all customers.  :-)

Volume-Number: Volume 20, Number 92
henry@zoo.toronto.edu (Henry Spencer) (07/04/90)
From: henry@zoo.toronto.edu (Henry Spencer)

>From: Jason Zions <jason@cnd.hp.com>
>How about because it constrains implementations to make DNI
>kernel-resident?

Nonsense.  Nothing in 1003.n constrains implementations to make
anything kernel-resident.  Things like read() are functions, which may
or may not reflect the primitives of the underlying kernel.  They are
merely required to communicate -- somehow -- with something that
performs the required services.

Why have two different kinds of endpoints for I/O?  We already have
one which is general enough to encompass almost every kind of I/O
under the sun.

>How about because the semantics of operations permitted on POSIX file
>descriptors are a poor match for many transport providers?  Read()/write()
>are stream operations; only TCP is a stream transport provider.  OSI TP0/2/4
>maps much more closely to stdio and fgets()/fputs() in that it is
>record-oriented.  What does it mean to seek() on a network endpoint?

Read()/write() are stream operations that work perfectly well as
record operations too.  As witness Unix ttys, which are
record-oriented devices on input, and Unix magtapes, which are
record-oriented both ways.

As for what it means to seek on a network endpoint, exactly the same
as it means to seek on a tty: probably nothing.  But why invent new
mechanisms for I/O when the old ones will do perfectly well?  "Don't
fix it if it ain't broken."

Henry Spencer at U of Toronto Zoology
henry@zoo.toronto.edu   utzoo!henry

Volume-Number: Volume 20, Number 93
karl@IMA.IMA.ISC.COM (Karl Heuer) (07/05/90)
From: karl@IMA.IMA.ISC.COM (Karl Heuer)

In article <778@longway.TIC.COM> henry@zoo.toronto.edu (Henry Spencer) writes:
>As for what it means to seek on a network endpoint, exactly the same as it
>means to seek on a tty: probably nothing.

Better yet, it should return an error (like an attempt to seek on a
pipe).  I don't think there's any excuse for tty seek having been
defined as a no-op in the first place; it's too bad POSIX didn't
require this to be fixed.  (Is there any reliable way to tell whether
a given fd is seekable?)

Volume-Number: Volume 20, Number 98
jsh@usenix.org (Jeffrey S. Haemer) (10/12/90)
Submitted-by: jsh@usenix.org (Jeffrey S. Haemer)
An Update on UNIX1-Related Standards Activities
October 11, 1990
USENIX Standards Watchdog Committee
Jeffrey S. Haemer, jsh@ico.isc.com, Report Editor
Summer-Quarter Standards Activities
This editorial addresses some of the summer-quarter standards activi-
ties covered by the USENIX Standards Watchdog Committee.2 In it, I've
emphasized non-technical issues, which are unlikely to appear in offi-
cial minutes and mailings of the standards committees. Previously
published watchdog reports give more detailed, more technical sum-
maries of these and other standards activities. If my comments move
you to read one of those earlier reports that you wouldn't have read
otherwise, I've done what I set out to do. Of course, on reading that
report you may discover the watchdog's opinions differ completely from
the ones you see here.  As watchdog editor I edit the reports; I don't
determine their contents. The opinions that follow, in contrast, are
mine.
Profiles
There's an explosion of activity in the profiles world, bringing with
it an explosion of problems, and dot zero, the POSIX guide group, is
at ground zero.3 The first problem is, ``What's a profile?'' Everyone
has a rough idea: it's a document that specifies an application-
specific set of standards (or pieces of standards). The best informal
illustration I've heard is from Michele Aden, of Sun Microsystems.
Imagine, she says, you have to write a guideline for buying lamps for
Acme Motors. You might require that the lamps have ANSI-standard,
three-prong plugs, accept standard one-way, hundred-watt bulbs, have
cords no shorter than five feet, and stand either two- to three-feet
tall (desk models) or five- to seven-feet tall (floor-standing
models). This combination of pointers to standards, additional
specifications, and detailed options, which gives purchasing agents
__________
1. UNIX is a Registered Trademark of UNIX System Laboratories in
the United States and other countries.
2. A companion article provides a general overview of the committee
itself.
3. I use ``dot zero'' to refer both to the P1003.0 working group and
to the document it's producing. These are common conversational
conventions among standards goers, and which of the two I mean is
usually obvious from context.
guidelines to help them make choices without tying their hands to a
specific vendor, is a profile - in this case, an Acme Motors lamp pro-
file. Dot zero now sees itself as a group writing a guide to help
profile writers pick their way through the Open-Systems'-standards
maze.
But that rough agreement is as far as things go. And the standards
world is never informal. For ``profile'' to graduate from a hallway-
conversation buzzword to an important organizing principle, it needs a
precise definition. And since there are already four groups writing
profiles - real-time, transaction processing, multiprocessing, and
supercomputing - TCOS needs to figure out what a profile is pretty
quickly. ISO already has IAPs, International Applications Profiles.
The ISO document TR 10K describes these in detail. Unfortunately, TR
10K was developed for OSI-related profiles and shows it. Cut-down
extracts of the standard appear in the document. Someone needs to
define a PAP, a POSIX Application Profile.
But that's just the first problem. Even thornier is the question
``What does it mean to say that something conforms to the POSIX
transaction-processing profile?'' If I want to write assertions for a
profile or tests to verify those assertions, how do I do it? Does it
suffice to conform to the individual components? What about their
interactions? The first principle of management is ``If it ain't
somebody's job, it won't get done.'' Dot zero has done such a good
job of promoting The Profile as an organizing principle for addressing
standards issues that people are beginning to press dot zero for
answers to questions like these. Unfortunately, that's a little like
killing the messenger. It's just not dot zero's job. So the funda-
mental profile question is ``Who's in charge?'' Right now, I think
the question sits squarely, if uncomfortably, in the lap of the SEC -
the Sponsors Executive Committee, which oversees the IEEE's
operating-systems activities.
In the meantime, the various working groups writing profiles are mak-
ing headway by just trying to define profiles and seeing where they
get stuck. Dot twelve, the real-time profile group, is busily making
various sorts of tables, to try to find a reasonable way to specify
the pieces that make up a profile, their options, and their interac-
tions. Dot ten, the supercomputing profile group, is seeking an
overall structure for a profile document that makes sense. Dot
eleven, the transaction-processing profile group, is trying to steal
from dots twelve and ten, an important test of the generality of the
other two groups' solutions. Dot fourteen, the multiprocessing pro-
file group, isn't far enough along to make theft worth their while,
but will eventually provide a second generality test. Think of it as
a problem in portable ideas.
Will I Win My Beer?
In my last editorial, I announced a beer bet with John Gertwagen over
whether threads will ballot and pass before the base dot-four (real-
time) ballot objections are resolved. I'm still betting on threads,
but it looks like the bet is still anyone's to win. Some folks assure
me that I'll win my beer handily, others say I don't have a chance.
At the summer POSIX meetings, in Danvers, Massachusetts, the dot-four
chair, Bill Corwin, challenged the threads folks to come up with a
ballotable draft by the end of the week, and they very nearly did. (I
hear complaints from some quarters that the vote to go to ballot
was 31 to 7 in favor, and that attempts to move to balloting were only
blocked because of filibusters from those opposed.) On the other
hand, technical reviewers are now resolving ballot objections to the
base with machetes.  They've thrown away asynchronous events
altogether, discarded real-time files, and adopted the mmap model that
the balloting group suggested.4  Dot four has moved from ``design
by working committee'' to ``design by balloting committee.''
Innovation
C.A.R. Hoare once said, ``One thing [the language designer]
should not do is to include untried ideas of his own.'' We have
followed that precept closely. The control flow statements of
Ratfor are shamelessly stolen from the language C, developed for
the UNIX operating system by D. M. Ritchie.
- Kernighan and Plauger5
Should standards groups just standardize existing practice, or should
they be solving known problems? And if they solve known problems, how
much innovation is allowed?  Shane McCarron's September UNIX Review
article6 uses the real-time group, dot four, as a focus for an essay
on this subject. His thesis is that standards bodies should only be
allowed to standardize what's boring. I've already seen John
Gertwagen's reply, which I assume will be printed in the next issue.
__________
4. Dot four's real-time files are currently a part of the
supercomputing profile. If they disappear from dot four, they may
reappear elsewhere.
5. Kernighan, Brian and Peter Plauger, Software Tools, Addison-
Wesley, 1979, p. 318.
6. McCarron, Shane, ``Commodities, Standards, and Real-Time
Extensions,'' UNIX Review, 8(9):16-19 (1990).
I find myself agreeing (and disagreeing) with both and recommend you
read them.
This battle will rage brighter in some of the groups less far along,
but sporadic fighting still breaks out in the shell and tools group,
dot two. Right now, collation and character classification are seeing
a lot of skirmishing. Some want to stay relatively close to the
existing practice, while others want to grow a mechanism to deal with
the Pandora's box of internationalization. My favorite current exam-
ple, though, is make. Bradford's augmented make is almost a decade
old. Stu Feldman's original is a couple of years older than that.
That decade has seen a number of good make replacements, some of them
wildly successful: Glenn Fowler's nmake has virtually replaced make
for large projects in parts of AT&T. Still, many of these upgrades
maintain the original make model,7 just patching up some of make's
more annoying craters and painting over its blemishes. At this point,
there is real consensus among make augmentors about some patches.
Most upgrades expand make's metarules. For example,
.c.o:
$(CC) $(CFLAGS) -c $<
might become
%.c : %.o
$(CC) $(CFLAGS) -c $<
Not much of a change, but it also gives us
s.% : %
$(GET) $(GFLAGS) -p $< > $>
...
in place of the current, baroque
__________
7. Fowler, Glenn, ``A Case for make,'' Software-Practice and
Experience, 20: S1/35-S1/46 (1990).
.c~.o:
$(GET) $(GFLAGS) -p $< > $>
...
Make's successors don't agree on syntax, but they all agree that
``~'' rules are the wrong solution to a real problem. Should dot two
standardize a newer solution? Existing-practice dogmatists would say,
``No. It's not make.'' Here's a place I say, ``Yes - if we can do it
in a way that doesn't break too many makefiles.'' The prohibition
should be against untried ideas, and I don't see those here. A year
or so ago, Stu Feldman (make), Glenn Fowler (nmake), Andrew Hume (mk),
and a handful of other make luminaries presented a proposal to add
four extensions to dot two's make. Not one is yet in the draft. I
hope that changes.
SCCT Faces a Serious Problem
At Danvers, the testing group, dot three, worked with dot two on test
assertions to try to avoid the kinds of problems created by the
P1003.1 test assertions, which dot one had no input into until the
assertions were in ballot.
A side effect of the collaboration, which is taking place before dot
two is finished, is that it may reveal that parts of dot two are
imprecise enough to require a rewrite.  Dot two's draft eight had
around four hundred ballot objections; draft nine saw fewer than half
that number.  There was hope that draft ten would halve that again,
bringing it within striking distance of being a standard.8  The asser-
tion work may point out and clear up rough spots that might otherwise
have escaped the notice of battle-fatigued balloters. (Paradoxically,
NIST, which is heavily represented in dot three and painfully familiar
with dot two's status and problems, is currently pushing for a shell-
and-tools FIPS based on the now-out-of-date draft nine.)
The exercise of trying to construct assertions for dot two before it
goes to ballot may bring some new testing problems into focus, too.
Before I explain what I mean, I'll give you a little background.
The POSIX effort has outgrown dot three, which did test assertions for
dot one and is in the process of constructing test assertions for dot
two. Dot three has, at most, a couple of dozen members, and the docu-
ment for dot two alone may swell to one- or two-thousand pages.9 If
__________
8. It didn't reach that goal. Keith Bostic tells me he submitted 132
objections himself.
dot three were to continue to do all test assertion work, it would
have to produce a similar document for at least a dozen other stan-
dards.
Reacting to this problem, the SEC created a steering committee, the
SCCT, to oversee conformance testing. The committee's current plan is
to help guide standards committees to write their own assertions,
which will be part of the base document. Test assertions, like
language independence, are about to become a standards requirement (a
standards standard).
With this change, the current process - write a base document, evolve
the base document until it's done, write test assertions for the
result, evolve the test assertions until they're done - would become:
write a base document with test assertions, then evolve the base docu-
ment, modifying the test assertions as you go.  A sensible-enough idea
on the surface, but after the joint dot-two, dot-three meeting I have
questions about how deep that sense runs.
First, does it really make sense to write assertions early? Working-
group members should be exposed to assertion writing early; when
working-group members understand what a testable assertion is, it's
easier to produce a testable document. Still, substantive, major
draft revisions are normal (see the real-time group's recent ballot,
for example) and keeping test assertions up-to-date can be as much
work as writing them from scratch. This meeting saw a lot of review
of draft-nine-based assertions to see which ones had to change for
draft ten.
Second, if you make the assertions part of the standard, they're voted
on in the same ballot. Are the same people who are qualified to vote
on the technical contents qualified to vote on the test assertions?
Third, writing good assertions is hard, and learning to write them
takes time. How eager will people in working groups be to give up
time they now spend writing and revising document content in order to
do assertions?
Fourth, is the time well spent? Not everything merits the time and
expense of a standard. If only a small number of organizations will
ever develop test suites for a particular standard (with none being a
special, but important, case), does it make sense for folks to spend
time developing standards for those test suites? Wouldn't it make
more sense to develop such standards after there is a clear need?
(This is a perverse variant of the ``existing practice'' doctrine.
Even if you don't think standards should confine themselves to
existing practice, does it make sense to innovate if there's never
going to be an existing practice?)
______________________________________________________________________
9. Any imagined glamour of POSIX meetings fades rapidly when one is
   picking nits in a several-hundred-page standards document. When
   asked where the next meeting was, one attendee replied, ``some
   hotel with a bunch of meeting rooms with oversized chandeliers and
   little glasses full of hard candies on the tables.''
Stay_Tuned_for_This_Important_Message
If you haven't yet had the pleasure of internationalizing applica-
tions, chances are you will soon. When you do, you'll face messaging:
modifying the application so that its text strings come from external
data files instead of being wired into the source. The sun is setting
on
main()
{
printf("hello, world\n");
}
and we're entering a long night of debugging programs like this:
#include <stdio.h>
#include <stdlib.h>      /* exit() */
#include <locale.h>      /* setlocale() */
#include <nl_types.h>    /* catopen(), catgets(), catclose() */
#include "msg.h"         /* decls of catname(), SETID, MSGID, etc. */

#define GRTNG "hello, world\n"   /* fallback if no catalog is found */

nl_catd catd;

int
main(int argc, char *argv[])
{
    setlocale(LC_ALL, "");                  /* honor LANG and LC_* */
    catd = catopen(catname(argv[0]), 0);    /* this program's catalog */
    printf("%s", catgets(catd, SETID, MSGID, GRTNG));
    catclose(catd);
    exit(0);
}
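The ``msg.h'' header is invented for the example, so anything I put
in it is a guess. Here is one minimal sketch of what it might hold -
the set and message numbers and the catname() helper are placeholders
of mine, not anything a standard defines:
     /* msg.h - hypothetical support header for the example above */
     #ifndef MSG_H
     #define MSG_H

     #define SETID 1     /* message set used by this program       */
     #define MSGID 1     /* "hello, world" within that message set */

     /* Map a program name (argv[0]) to a catalog name that
      * catopen() can resolve, e.g. "/usr/bin/hello" -> "hello.cat" */
     extern char *catname(const char *progname);

     #endif /* MSG_H */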
This, um, advance stems from a desire to let the program print
chào các ông
instead of
hello, world
when LANG is set to ``Vietnamese.''
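Where does the Vietnamese text come from? In the X/Open-style scheme
the example uses, the strings live in a message catalog, compiled
from a plain-text source file by gencat. A sketch, with file names of
my own choosing and with the set and message numbers matching the
example above:
     $ hello.msg - illustrative catalog source, not from any standard
     $set 1
     1 chào các ông\n
Run something like ``gencat hello.cat hello.msg'' for each language,
and arrange (via NLSPATH) for catopen() to find the catalog that
matches the current locale.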
Most programs use text strings, so the system services interface
group, dot one, has been thinking about portable library calls to sup-
ply such strings and portable formats for the files that contain them.
Actually, ``re-thinking'' is probably more accurate than ``thinking
about.'' 1003.1a Draft 9 specified a design by the UniForum Techni-
cal Committee on Internationalization. At Danvers, X/Open counter-
proposed a variant of its existing XPG3 specification, arguing that
the X/Open scheme may have problems but it also has users, while the
UniForum proposal is still in the laboratory. (It brings to mind the
apocryphal story of Stu Feldman's wanting to improve the design of
make, but feeling he couldn't because he already had seven users.)
Someone from Unisys also brought a proposal, different from both
UniForum's and X/Open's.
That no one even showed up to defend the UniForum proposal shows that
there is something wrong with standardizing messaging. One minute,
there is enough support for a messaging scheme to get it into the
draft standard; the next, there's none at all. In the end, the work-
ing group agreed that a messaging standard was premature and that the
free market should continue to operate in the area for a while.
Given the relative sizes of the organizations concerned, this outcome
probably sticks us with the X/Open scheme for a while, which I find
the ugliest of the lot. Still, it's not a standard, and there's room
for innovation and creativity if we're quick about it. The ``existing
practice'' criterion is supposed to help avoid a requirement for mas-
sive, murderous source code changes. We should be looking for the
messaging scheme that doesn't require changes in the first place, not
the one with the most existing victims.
Language_Independence_Stalls_ISO_Progress
Internationally, 1003.4 (real-time), 1003.5 (Ada bindings), and 1003.9
(FORTRAN bindings) are being held hostage by ISO, which refuses to
loose them on the world until we come up with a language-independent
binding for 1003.1. The question is, who will do the work? ``Not
I,'' says dot four, whose travel vouchers are signed by companies
caught up in the glamour of real-time and threads; ``Not I,'' say dot
five and dot nine, who seldom have even ten working-group members
apiece; ``Not I,'' say the tattered remnants of dot one, exhausted
from struggling with 1003.1-1988, FIPS-151 and 151-1, and (almost)
1003.1-1990, before any other groups have even a first standard
passed. Where is the Little Red Hen when we need her?
Should_We_Ballot_POSIX_the_Way_We_Ballot_Three-Phase_Power?
In the meantime, we progress inexorably toward balloting on several
IEEE/ANSI standards. The sizes of the drafts (and several contribu-
tors to comp.std.unix) raise real questions about whether the IEEE's
balloting process makes sense for the sort of standards work POSIX is
performing. A month or so might be enough to review a few-page
hardware standard. But is it enough for the nearly 800 pages in the
latest recirculation of dot two? Does it really make sense to review
the standard for grep in hard copy? Many would like to see longer
balloting times and on-line access to drafts. Some argue that the
final standard should be available only from the IEEE, both to ensure
authenticity and to provide the IEEE with income from its standards
efforts; even that argument seems weak. Checksums can guarantee
authenticity, and AT&T's Toolchest proves that electronic distribution
works: I'll bet ksh has paid for itself several times over.
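To make the checksum point concrete: nothing fancier than the old
sum(1) idea is needed. The sketch below is mine, not anybody's actual
distribution mechanism; publish the number next to the posted draft,
and a balloter who computes the same value over a downloaded copy
knows it matches what was published.
     #include <stdio.h>

     /* A bare-bones 32-bit rotate-and-add checksum over a whole
      * file, loosely in the spirit of sum(1).  Illustration only. */
     int
     main(int argc, char *argv[])
     {
         FILE *fp;
         int c;
         unsigned long sum = 0, bytes = 0;

         if (argc != 2 || (fp = fopen(argv[1], "r")) == NULL) {
             fprintf(stderr, "usage: draftsum file\n");
             return 1;
         }
         while ((c = getc(fp)) != EOF) {
             /* rotate right one bit within 32 bits, then add the byte */
             sum = ((sum >> 1) | ((sum & 1UL) << 31)) + (unsigned long)c;
             sum &= 0xffffffffUL;
             bytes++;
         }
         fclose(fp);
         printf("%lu %lu %s\n", sum, bytes, argv[1]);
         return 0;
     }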
``We_handed_1201.1_its_head_and_asked_it_to_comb_its_hair.''
Moving away from POSIX, we come upon 1201.1, still in search of an
officially sanctioned mission that the group wants to take on. The
group currently has a PAR (charter) to standardize various aspects of
X-based windowing - principally the toolkit-level API - but any hope
of compromise between the OPEN LOOK and OSF/Motif factions died at the
winter-quarter Utah meetings. In a moment of responsible, adult
behavior, the group recovered by switching to a dark horse: a
window-system-independent API that could be implemented on top of
either product. Marc Rochkind's XVT, which already allows users to
write programs that are portable across several, unrelated window sys-
tems including X, the Mac, and MS-Windows, was offered as a proof-of-
concept.
While the original charter could probably encompass the new XVT work,
the group seemed to feel that this direction change, together with the
fragmenting of the original group into separate toolkit, drivability,
UIMS, and X intrinsics efforts, required that they ask the SEC for a
new charter. (The drivability group has already had a separate PAR
approved and is now 1201.2.) The group convened a pair of interim
meetings in Milpitas, California, and Boulder, Colorado, to forge a
PAR that would meet the SEC's new, stricter standards for PAR approval
by the summer Danvers meeting. They didn't succeed.
Most of the problems seem to have been administrative missteps. Some
examples:
- Working-group members complained that the Milpitas meeting took
place without enough notice for everyone to attend, and issues
that had been resolved at the interim meetings were re-opened in
Danvers.
- The PAR was so broadly written that at least one technology (Ser-
pent) was advanced as a candidate that almost no one thought
should even be considered.
- Some working-group members hadn't even received copies of the XVT
reference manual before they reached Danvers.
- Many SEC members appeared not to have seen a copy of the PAR
until the afternoon before the SEC meeting, and some saw the
final PAR for the first time at the SEC meeting itself.
Many people who weren't familiar with the proposal ended up uneasy
about it, not because they'd read it and didn't like it - they'd not
been given much chance to read it - but because a lack of attention to
administrative details in the proposal's presentation sapped their
confidence in the group's ability to produce a sound standard. After
all, standards work is detail work. In the end, the SEC tactfully
thanked the group and asked them to try again. One SEC member said,
``We handed 1201.1 its head and asked it to comb its hair.''
I believe all of this is just inexperience, not a symptom of fundamen-
tal flaws in the group or its approach. If 1201.1 can enlist a few
standards lawyers - POSIX has no shortage of people who know how to
dot all the i's and cross all the t's - and can muster the patience to
try to move its PAR through methodically and carefully, I think the
group will give us a standard that will advance our industry. If it
doesn't do so soon, though, the SEC will stop giving it its head back.
POSIX_Takes_to_the_Slopes
Finally, I want to plug the Weirdnix contest. We currently have a
great prize - including a ski trip to Utah - and only a few entries.10
The contest closes November 21, 1990. Send your entries to me,
jsh@ico.isc.com.
__________
10. The occasionally heard suggestion that Brian Watkins was found
clutching Mitch Wagner's entry is tasteless; it is almost - but,
luckily, not quite - beneath me to repeat it.
Volume-Number: Volume 21, Number 198