[comp.unix.wizards] ABIs and the future of UNIX

dave@micropen (David F. Carlson) (03/23/88)

This is a reposting as a machine ate the first try for most locales:


There has been much UNIX news recently on the subject of ABI (Application Binary
Interface) standard which AT&T along with Motorola, SUN and Intel are setting.
If I understand the problem, as it is now each UNIX vendor for any machine
they sell UNIX for is responsible for defining a binary protocol for things such
as alignment (and/or packing), traps to kernel with associated arguments, etc.
This means that each vendor of 680x0 or SPARC or 80x86 can potentially define
a proprietary binary format which renders their binaries executable only
on their version of UNIX.  In other words, there may be no big win derived
from building/buying a system from standard hardware (680x0) over any vendor's
proprietary hardware.  Thus, the key to UNIX portability is high-level language,
not the binary standards that other operating system / hardware vendors use to
keep customers within the fold.  Remember not too many years ago when you told
your IBM rep that the VM/370 binary standard or the VAX-11 VMS binary standard
was not in your long-term best interest but in theirs?  That their claims of
portability across *their* machine line would not lead to the long-term
portability and target-machine independence that modern software engineering
and marketing reality dictate?

The trick to UNIX portability (for SystemV only) was to take your *source*
code and compile it on someone else's machine.  If properly coded, portability
within SystemV is not very difficult to achieve.  However, no vendor wants
to support thousands of variants and few vendors want to distribute their
proprietary source code to all comers.  (Public domain codings of XWindows,
USENET Netnews, etc. illustrate that real portability in high level language
is not a LaManchian dream.  It is the proprietary nature of software that
is the bone in the throat of software houses for source distribution.)

The AT&T ABI seems to be an attempt to allow at least partial standards for
machine binary interchange (restricted to processor families).  What this
in effect will say is that there are preferred hardware platforms for UNIX,
the portable operating system, in that software vendors using popular ABI
platforms will be able to sell more software than those using a perhaps
technically superior but less popular ABI.  In turn, buyers will
bypass a technically superior solution in favor of a popular ABI option
solely because of binary interchange:  exactly why we rejected popular software
platforms to choose UNIX in the first place.

The big winners seem to be software vendors, who would no longer have to pay
for multi-vendor ports and support, and hardware vendors of popular ABI
machines (in all likelihood Motorola, Intel and SUN).  So SUN and AT&T
are the really big winners.  The losers are other UNIX vendors for less
popular platforms like Amdahl, CRAY, HP, MIPS, Alliant, AMD29000-based,
NS32XXX-based, Multiflow, and even the venerable VAX.  The VAX will be
the really big loser:  DEC is so antsy about SystemV that they may reject the
whole idea.  Furthermore, AT&T has announced that the VAX will no longer
be a primary port engine:  SUN's SPARC will be.  No sooner will these ABIs
become real standards than vendors will stop supporting non-ABI architectures.
(For example, will SUN sell XXXX source when the three major ABIs allow
them to support the three machine types they sell by binary interface alone?
It seems unlikely.  Note that the Intel and Motorola ABIs would cover 95%+ of
all UNIX sites.)

I believe what we all seek is a means of portability across machine lines
without having to support N-machines to sell a product.  Parts of this are in
place:  COFF has conversion routines for correctly ordering big-endian vs. 
little-endian data sections.  Why can't a machine independent intermediate
form be developed for UNIX solely to be translated into native binary on the
target machine by a similar utility?  This form would have to be opaque 
enough to discourage un-compiling but adaptable enough to allow for tight 
native translation on any SystemV (and eventually POSIX) machine.  Perhaps
a meta-assembler language such as the DoD CORE set as a possible portable 
target code for PCC.  Or perhaps even some intermediate PCC form that a code
generator fixes on the target.  The form should not preclude typical machine 
dependent optimizations and data packing.

There are very good reasons for wanting high-level language portability across
machines.  In particular, having hardware vendor independence and the ability
to choose a not-so-popular but technically superior hardware platform without
forfeiting practically *anything*.  The reasons for ABI are to facilitate
hardware dependent portability.  The cost of ABIs may be setting up a
"preferred" UNIX hardware platform or platforms which, as a standard, could
preclude consideration of non-ABI platforms because of the manifest advantages
of the preferred, albeit perhaps technically inferior, (ABI) platform(s).

-- 
David F. Carlson, Micropen, Inc.
...!{ames|harvard|rutgers|topaz|...}!rochester!ur-valhalla!micropen!dave

"The faster I go, the behinder I get." --Lewis Carroll

gwyn@brl-smoke.ARPA (Doug Gwyn ) (03/24/88)

In article <431@micropen> dave@micropen (David F. Carlson) writes:
>What this
>in effect will say is that there are preferred hardware platforms for UNIX,
>the portable operating system, in that software vendors using popular ABI 
>platforms will be able to sell more software than those using a perhaps
>technically superior but less popular ABI.  In turn, buyers will
>bypass a technically superior solution in favor of a popular ABI option
>solely because of binary interchange:  exactly why we rejected popular software
>platforms to choose UNIX in the first place.

I think you're drawing false conclusions, perhaps because you're working
from false premises.  There is no reason that other architecture families
could not also follow their own binary standards, and in fact there are
efforts underway to do so for some families, for example the 386.  The
only real significance for UNIX as such is that the porting base will
change from 3B2 to SPARC, which may result in the sale of a few more
Sun-4s to UNIX VARs but otherwise is of little consequence for most VARs.

Arguing that people will buy ABI products because it is to their benefit
to do so seems pretty strange -- of course they would.  There is much
more to system evaluation than "technical superiority", as witness the
IBM PC.

>Furthermore AT&T has announced that the VAX will no longer 
>be a primary port engine:  SUN's SPARC will be.

I'm afraid you're behind the times.  The porting base has been 3B2
for quite some time.

>No sooner will these ABIs
>become real standards than vendors will stop supporting non-ABI architectures.

No, vendors will stop supporting architectures only when they cease to
be a significant segment of the market.  Even on the improbable
assumption that SPARC will take the world by storm, it would be many
years before it would substantially displace current architectures.

>COFF has conversion routines for correctly ordering big-endian vs. 
>little-endian data sections.

COFF files imported from a system with the wrong byte order are unusable.
I had to write a COFF converter to deal with this.  (The one AT&T has in
their SGS only works on the source machine, not on the destination.)
COFF also falls apart miserably on 64-bit machines etc.  It was a nice
attempt but it needs improvement.

mishkin@apollo.uucp (Nathaniel Mishkin) (03/24/88)

In article <431@micropen> dave@micropen (David F. Carlson) writes:
>I believe what we all seek is a means of portability across machine lines
>without having to support N-machines to sell a product.  Parts of this are in
>place:  COFF has conversion routines for correctly ordering big-endian vs. 
>little-endian data sections.  Why can't a machine independent intermediate
>form be developed for UNIX solely to be translated into native binary on the
>target machine by a similar utility? 

Nice idea, but I'm dubious that all the people who are inventing and
implementing new instruction architectures would be able to shoehorn
all their compiler and architectural smartness into a "universal"
intermediate form.

What mystifies me about this whole ABI business is not so much the desire
for a set of ABIs, one per low-level hardware architecture, but the idea
that some people (Sun? AT&T?) appear to express for a *single* ABI based
on a single architecture.  I mean, is the world really ready to standardize
on any single architecture that exists today?  It just seems absurd to
me.  If the world had standardized on a single architecture just a few
years ago, some of the recent fairly radical, but apparently successful
architectural ideas (e.g. Multiflow's VLIW) might never have made it
into the real world.  Is it really in the long term interest of end-users
to run the risk of stifling that sort of development?  Or am I being
excessively paranoid about all this?

bzs@bu-cs.BU.EDU (Barry Shein) (03/25/88)

Hey, I hear that UniSys has agreed to use the SPARC in future
products.

I heard years ago that when IBM first started selling computers, people
referred to them as "IBM Univacs"; "Univac" had become generic for
"computer". I wonder if this means we now get "Sun Univacs"?

	-Barry ":-)" Shein, Boston University

bzs@bu-cs.BU.EDU (Barry Shein) (03/25/88)

From Nathaniel Mishkin
>What mystifies me about this whole ABI business is not so much the desire
>for a set of ABIs, one per low-level hardware architecture, but the idea
>that some people (Sun? AT&T?) appear to express for a *single* ABI based
>on a single architecture.  I mean, is the world really ready to standardize
>on any single architecture that exists today?  It just seems absurd to
>me.  If the world had standardized on a single architecture just a few
>years ago, some of the recent fairly radical, but apparently successful
>architectural ideas (e.g. Multiflow's VLIW) might never have made it
>into the real world.  Is it really in the long term interest of end-users
>to run the risk of stifling that sort of development?  Or am I being
>excessively paranoid about all this?

To some extent I think you're being excessively paranoid tho the
concern is healthy. I don't think that the intent is to define
"the" ABI, but rather "an" ABI.

The motivation, of course, is to encourage third-party software
developers to sell binary "bubble-pak" software for Unix, just like
they could for the IBM/PC, itself another "ABI". (Unix, until now, has
only been a source standard.)

Obviously in there lies some sort of zero-sum game. If there are
dozens of (accepted by the market) ABI's it sort of nullifies the
advantage of having any. If there are a few I guess it still works,
particularly if the architectures don't overlap entirely.

I don't think it necessarily detracts from developmental efforts;
they remain where they are today in most ways. In some ways they're
better off: now people like Microsoft have been encouraged to port
their code to Unix, whereas before they stayed in the comfortable
domains of IBM/PCs and Macs. Let's face it, for whatever reason these
guys required an ABI. So there's a better chance of going to them and
other vendors, convincing them to recompile their software for your
machine, and setting up some marketing agreement, assuming it takes
little more than a recompile to port to your whiz-bang architecture.

I would look at it as more of a new dimension, rather than detracting
from the current situation, one which a given vendor may or may not be
able to participate in and to varying degrees.

Think of it this way, how much trouble did AT&T declaring the 3B2 to
be the "porting base" cause? This will no doubt be more pervasive, but
I believe it's similar.

	-Barry Shein, Boston University

rkh@mtune.ATT.COM (964[jak]-Robert Halloran) (03/25/88)

In article <7534@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
>In article <431@micropen> dave@micropen (David F. Carlson) writes:
>>What this
>>in effect will say is that there are preferred hardware platforms for UNIX,
>>the portable operating system, in that software vendors using popular ABI 
>>platforms will be able to sell more software than those using a perhaps
>>technically superior but less popular ABI.  In turn, buyers will
>>bypass a technically superior solution in favor of a popular ABI option
>>solely because of binary interchange:  exactly why we rejected popular software
>>platforms to choose UNIX in the first place.
>
>I think you're drawing false conclusions, perhaps because you're working
>from false premises.  There is no reason that other architecture families
>could not also follow their own binary standards, and in fact there are
>efforts underway to do so for some families, for example the 386.  The
>only real significance for UNIX as such is that the porting base will
>change from 3B2 to SPARC, which may result in the sale of a few more
>Sun-4s to UNIX VARs but otherwise is of little consequence for most VARs.

I have seen in the trade press that Motorola has signed an agreement 
with AT&T to produce an ABI for the 68XXX family (XXX probably >= 020).
I am pretty sure a '386 ABI is in the works.

As far as porting bases go, I am almost certain it will remain the 3B2 for
political reasons, but the SPARC version will appear hot on its heels.



						Bob Halloran
=========================================================================
UUCP: {ATT-ACC, rutgers}!mtune!rkh		DDD: (201)251-7514 
Internet: rkh@mtune.ATT.COM			       evenings ET
USPS: 19 Culver Ct, Old Bridge NJ 08857
Disclaimer: These opinions are solely MINE; any correlation with AT&T
	policies or positions is coincidental and unintentional.
Quote: "There were incidents & accidents, there were hints & allegations"
		-- Paul Simon
=========================================================================

mash@mips.COM (John Mashey) (03/27/88)

In article <20913@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>
>Hey, I hear that UniSys has agreed to use the SPARC in future
>products.

Whoa! UniSys signed a license for SPARC, which means you pay $X to get
the design information.  Here is a useful quote from Computerworld,
March 14, 1988, page 7:

`"We have just signed a license agreement, but we haven't specifically
made a determination what to do.  We are looking at a high-performance
workstation and a large UNIX system," said Fred Meier, vice-president
of corporate program management at Unisys.
Unisys examined other reduced instruction set computing (RISC) architectures
and may yet commit to other RISC designs as well, Meier added.'
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

sl@van-bc.UUCP (pri=-10 Stuart Lynne) (03/28/88)

In article <431@micropen> dave@micropen (David F. Carlson) writes:
>There has been much UNIX news recently on the subject of ABI (Application Binary
>Interface) standard which AT&T along with Motorola, SUN and Intel are setting.
>If I understand the problem, as it is now each UNIX vendor for any machine
>they sell UNIX for is responsible for defining a binary protocol for things such
>as alignment (and/or packing), traps to kernel with associated arguments, etc.

>I believe what we all seek is a means of portability across machine lines
>without having to support N-machines to sell a product.  Parts of this are in
>place:  COFF has conversion routines for correctly ordering big-endian vs. 
>little-endian data sections.  Why can't a machine independent intermediate
>form be developed for UNIX solely to be translated into native binary on the
>target machine by a similar utility?  This form would have to be opaque 
>enough to discourage un-compiling but adaptable enough to allow for tight 
>native translation on any SystemV (and eventually POSIX) machine.  Perhaps
>a meta-assembler language such as the DoD CORE set as a possible portable 
>target code for PCC.  Or perhaps even some intermediate PCC form that a code
>generator fixes on the target.  The form should not preclude typical machine 
>dependent optimizations and data packing.
>

This is quite possible to do. The ill-fated p-System from Softech
Microsystems did it! (Still available from Pecan Software, I believe.)

The p-System was a complete operating system, with development tools etc.,
which was ported across a large number of CPUs (808*, 68000, LSI-11, VAX,
6502, ...).

The system consisted of a p-code interpreter and BIOS developed for each
architecture, plus the rest of the system, which was distributed as p-code
binary programs. The big/little endian problem, floating point constants,
and related problems were all solved. You could develop your program on any
machine and run the binary on any machine.

Also available was a Native Code Generator. This could be used by the *end
user* to *selectively* convert parts of a binary p-code file, on a
per-procedure basis, to the native code of his system. This had two results:
the binary file got bigger (and slower to swap) and faster to execute.

In theory at least the p-System allowed for complete portability while
retaining the ability to convert to very fast native code if required on
the user's machine. While this did work in practice as well, unfortunately
many other problems factored into the equation.

One of the big problems was that for machines like the IBM PC the *only* way
to get decent performance was to avoid the use of the operating system if
you could: write directly to screen memory, etc.

The joke at the time (circa 1984) was that even though the p-System could
run on a dozen microprocessors, there were more IBM PC systems than all of
the other processors put together. This meant that anyone trying to make a
buck optimized his product highly for the MS-DOS/IBM PC market. He just
didn't care if it didn't run on the other 2 or 3 percent of the potential
market. Losing any sort of competitive edge in the IBM PC market couldn't be
regained elsewhere.

The moral is that this type of idea has been tried, and technically it worked
well. But unless there are several potential markets that can be addressed,
it just isn't worth the additional effort. In the case of Unix with Intel /
Motorola / SPARC it probably would be worth it, but I doubt anyone is
giving it much thought.

PS. I'm not a Cobol person, but I have dim memories of a product blurb for
one of the more popular Cobol compilers doing something along these lines as
well: Cobol to object code which could be interpreted on any of the
supported systems.

-- 
{ihnp4!alberta!ubc-vision,uunet}!van-bc!Stuart.Lynne Vancouver,BC,604-937-7532

rkh@mtune.ATT.COM (964[jak]-Robert Halloran) (03/29/88)

In article <431@micropen> dave@micropen (David F. Carlson) writes:
>There has been much UNIX news recently on the subject of ABI (Application Binary
>Interface) standard which AT&T along with Motorola, SUN and Intel are setting.
>If I understand the problem, as it is now each UNIX vendor for any machine
>they sell UNIX for is responsible for defining a binary protocol for things such
>as alignment (and/or packing), traps to kernel with associated arguments, etc.

>I believe what we all seek is a means of portability across machine lines
>without having to support N-machines to sell a product.  Parts of this are in
>place:  COFF has conversion routines for correctly ordering big-endian vs. 
>little-endian data sections.  Why can't a machine independent intermediate
>form be developed for UNIX solely to be translated into native binary on the
>target machine by a similar utility?  This form would have to be opaque 
>enough to discourage un-compiling but adaptable enough to allow for tight 
>native translation on any SystemV (and eventually POSIX) machine.  Perhaps
>a meta-assembler language such as the DoD CORE set as a possible portable 
>target code for PCC.  Or perhaps even some intermediate PCC form that a code
>generator fixes on the target.  The form should not preclude typical machine 
>dependent optimizations and data packing.

It was my understanding that the idea of an ABI was that applications
for a SPECIFIC processor family, i.e. SPARC, '386, 68030, etc., would
be binary-compatible, to avoid the current problem where '286 Xenix
binaries won't run on a '286 system running Microport, or where a 68xxx
binary for the NCR Tower doesn't run on the <fill in some other 68K-based
Unix box>.  I'm using these only as examples; the idea is that application
vendors would only have to write ONE version for a given type of CPU
and have that run on any Unix box that uses that CPU.  I do NOT believe
that the idea is to have some 'intermediate-code' standard to be interpreted
on the various processor families.

(Any company names mentioned up above are purely for illustration;
 no flames about beating up on brand X, please.)

						Bob Halloran
=========================================================================
UUCP: {ATT-ACC, rutgers}!mtune!rkh		DDD: (201)251-7514 
Internet: rkh@mtune.ATT.COM			       evenings ET
USPS: 19 Culver Ct, Old Bridge NJ 08857
Disclaimer: These opinions are solely MINE; any correlation with AT&T
	policies or positions is coincidental and unintentional.
Quote: "There were incidents & accidents, there were hints & allegations"
		-- Paul Simon
=========================================================================

sample@chimay.cs.ubc.ca (Rick Sample) (03/29/88)

In article <1938@winchester.mips.COM> mash@winchester.UUCP (John Mashey) writes:
>
>Whoa! UniSys signed a license for SPARC, which means you pay $X to get
>the design information.  Here is a useful quote from Computerworld,
>March 14, 1988, page 7:
>
>`"We have just signed a license agreement, but we haven't specifically
>made a determination what to do.  We are looking at a high-performance
>workstation and a large UNIX system," said Fred Meier, vice-president
>of corporate program management at Unisys.
>Unisys examined other reduced instruction set computing (RISC) architectures
>and may yet commit to other RISC designs as well, Meier added.'
>-- 

Seems that there may be some disagreement within Unisys as to whether
or not they have decided to use Sparc.  Here's a quote from Info World
of March 14, page 1:

`For its part, Unisys said it would begin shipping SPARC hardware
within a year and is counting on the architecture as the basis for
its future Unix hardware business.'

Rick Sample
Department of Computer Science
University of British Columbia

sample@cs.ubc.ca

gerard@tscs.UUCP (Stephen M. Gerard) (03/29/88)

In article <431@micropen> dave@micropen (David F. Carlson) writes:
>
>Why can't a machine independent intermediate
>form be developed for UNIX solely to be translated into native binary on the
>target machine by a similar utility?  This form would have to be opaque 
>enough to discourage un-compiling but adaptable enough to allow for tight 
>native translation on any SystemV (and eventually POSIX) machine.

This sounds good to me!

As a software developer, I can appreciate the need for protection of source
code.  However, it is possible, through the use of a disassembler and some
work, to generate usable assembly language source from a binary.

A pseudo-assembler interface could take advantage of optimized library
routines for each processor type and yield satisfactory results for most
applications.  This type of standard would not discriminate against the less
popular CPUs and could offer across-the-board compatibility for UNIX systems
ranging from desktop PCs to Crays.  This would still give the software
developer reasonable protection, because without documentation and meaningful
variable names, the ability to edit such code would be limited to about the
same level as code generated by a good quality disassembler.  The plus side
for the software developer is that they now have a much larger market to sell
to.  Software vendors for highly compute-intensive applications could still
offer native code releases at a greater cost.

Any binary standard that limits you to one or two primary CPU vendors is
better than no standard but, in my opinion, limits your choices and is still
too proprietary.  A solution that would be fair to all hardware vendors would
generate greater momentum towards the standardization of UNIX and offer the
end user a greater variety of software solutions.  A good standard would
allow the end user to purchase the best hardware solution for their needs
without the fear that they will be limited in their choice of applications
software.  After all, it is the end user that buys the hardware and software,
fueling the industry with the capital to ensure profits and future
improvements.  The way I look at it, everyone can be a winner!

The end user gets:
	+ An almost unlimited choice in software.

	+ Hardware vendor independent protection for their software investment.

	+ The ability to choose hardware by hardware specifications not
	  software availability.

	+ Lower pricing for popular applications due to increased sales
	  volume and competition.

	+ The ability to easily adapt default values to local conventions.

The software developer gets:
	+ Increased compatibility for their software without the need to port
	  to multiple machines.

	+ Increased revenues due to increased sales.

	+ Ability to make improvements faster because programmers can
	  concentrate on the application as opposed to porting problems.

	+ The ability to compete in both the low and high end markets.

The hardware vendor gets:
	+ An increased library of applications that can be used to sell
	  their machines.

	+ A greater potential market created by the increased acceptance
	  of UNIX that would be created by such a standard.

Across-the-board binary compatibility would give the UNIX marketplace a
shot in the arm and encourage small businesses to venture out of the MS-DOS
world and take a good hard look at UNIX.

I feel that with true hardware-independent binary (executable) compatibility,
UNIX can easily become the operating system *standard*.

Of course these things can't happen overnight; after all, there are still
vendors out there selling systems with System V.1.

------------------------------------------------------------------------------
Stephen Gerard  -  Total Support Computer Systems  -  Tampa  -  (813) 876-5990
UUCP: gerard@tscs				  ...codas!usfvax2!tscs!gerard
US-MAIL: Post Office Box 15395 - Tampa, Florida  33684-5395

walter@garth.UUCP (Walter Bays) (03/31/88)

In article <185@tscs.UUCP> gerard@tscs.UUCP (Stephen M. Gerard) writes:
>A pseudo assembler interface could take advanatge of optimized library 
>routines for each processor type and yield satisfactory results for most 
>applications.  This type of standard would not discriminate against the less 
>popular cpu's and could offer across the board compatability for UNIX systems 
>ranging from desktop PC's to Cray's.  This would still give the software 
>developer reasonable protection because without documentation and meaningful 
>variable names, the ability to edit such code would be limited to about the 
>same level as code generated by a good quality disassembler.  The plus side 
>for the software developer, is that they now have a much larger market to sell 
>to.  ... [discussion of advantages to end-users and hardware vendors]

The idea sounds very good.  But perhaps you can explain why it's so
hard for a developer to provide multiple versions.  (This is not a
knock at developers; I simply don't know the answer.) If we believe in
source code standardization, all you have to do is recompile.  (Right?
:-) Is the problem access to the various machines to do the port?
Continued access for customer support?  Incompatibilities:  C-C,
SysV-SysV, BSD-BSD, SysV-BSD, X.Windows-X.Windows, X.Windows-NeWS?  Or
is the problem really distribution:  that you would have to produce
versions for M machines times N media formats, and your distributors
would have to stock that times S software houses?

-- 
------------------------------------------------------------------------------
Any similarities between my opinions and those of the
person who signs my paychecks is purely coincidental.
E-Mail route: ...!pyramid!garth!walter
USPS: Intergraph APD, 2400 Geng Road, Palo Alto, California 94303
Phone: (415) 852-2384
------------------------------------------------------------------------------

edler@cmcl2.NYU.EDU (Jan Edler) (04/01/88)

There are always pros and cons to standardizing something in this
business.  The main advantage of a source compatibility standard (such
as SVID or POSIX) is that it increases the portability and availability of
software.  A disadvantage is that programs conforming to the standard
can't take advantage of extensions provided by specific UNIX
implementations.  Of course, non-portable programs will use them and,
if successful and generally applicable, such extensions may be
incorporated into future revisions of the standard.

In the case of binary interfaces, the technical situation is much the
same for each processor type.  Portability of binary programs is
improved, but if a program is distributed in standard binary form, it
can't take advantage of extensions provided by specific UNIX
implementations.  I think it will be difficult to get such extensions
incorporated in new revisions of the binary standard if they don't
directly correspond to new features also being added to the source standard.

In both cases, I worry that the UNIX implementor will be discouraged
from exploring innovative extensions to the system.  If most
distributed programs adhere to the relevant standard (source or
binary), the incentive to innovate is reduced.  The benefits of source
compatibility are so great that I believe it is well justified.
However, I'm not so convinced about the benefits of binary standards.

Consider that if I build a machine based on microprocessor X, and many
programs are available only in binary standard form, I might think
twice before investing significant effort into highly optimizing
compilers, since the binary programs won't benefit (since they were
presumably compiled on my competitor's machines, with inferior
compilers but larger market share).  And I might be disinclined to
invent new implementations of standard kernel functionality, such as

	getpid() in user mode
	time(), gettimeofday(), etc. in user mode
	pipes in user mode
	more efficient signal handling mechanisms

All of these enhancements (and many others) can be done in a way that
conforms to source standards, but none of them will benefit binary
standard programs.  Many of them aren't even generally applicable (e.g.
user mode pipes make more sense on multiprocessors than
uniprocessors).  I'm not saying the success of binary standards will
kill innovation, but I think it would have a detrimental effect.

Jan Edler
NYU Ultracomputer Project
edler@nyu.edu

gwyn@brl-smoke.ARPA (Doug Gwyn ) (04/01/88)

In article <24674@cmcl2.NYU.EDU> edler@cmcl2.UUCP (Jan Edler) writes:
>In both cases, I worry that the UNIX implementor will be discouraged
>from exploring innovative extensions to the system.

If the interface provided is sufficiently powerful and general,
then it can accommodate many such extensions.  E.g. transparent
networked file systems.  Face servers.  Processes as files.
Etc.  (Do all the good ideas come out of Murray Hill, or does
it only seem that way?)

It might be nice if every time someone decided to add a system
call to achieve some "added functionality", they were required
to identify one to be removed at the same time.  (I have heard
that this was an informal rule during early UNIX development;
I don't know if it's true but it should have been.)

lvc@tut.cis.ohio-state.edu (Lawrence V. Cipriani) (04/01/88)

In article <7599@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
>In article <24674@cmcl2.NYU.EDU> edler@cmcl2.UUCP (Jan Edler) writes:
>
>(Do all the good ideas come out of Murray Hill, or does
>it only seem that way?)
>

No, many good ideas come out of Columbus too!  Though they usually
wouldn't be of interest to non-telecommunications customers.


-- 
Larry Cipriani, AT&T Network Systems and Ohio State University
Domain: lvc@tut.cis.ohio-state.edu
Path: ...!cbosgd!osu-cis!tut.cis.ohio-state.edu!lvc (weird but right)

rsalz@bbn.com (Rich Salz) (04/02/88)

In comp.unix.wizards (<7599@brl-smoke.ARPA>), gwyn@brl.arpa writes:
	(Do all the good ideas come out of Murray Hill, or does
	it only seem that way?)

Not at all.  It's just that other places don't have proponents who
are as vocal.
	/r$
-- 
Please send comp.sources.unix-related mail to rsalz@uunet.uu.net.

dave@sdeggo.UUCP (David L. Smith) (04/02/88)

In article <587@garth.UUCP>, walter@garth.UUCP (Walter Bays) writes:
> In article <185@tscs.UUCP> gerard@tscs.UUCP (Stephen M. Gerard) writes:
[ A call for a pseudo-code machine-independent binary standard ]
> 
> The idea sounds very good.  But perhaps you can explain why it's so
> hard for a developer to provide multiple versions.  (This is not a
> knock at developers; I simply don't know the answer.) If we believe in
> source code standardization, all you have to do is recompile.  (Right?
> :-) Is the problem access to the various machines to do the port?

Well, there are enough problems just porting across machines and slightly
different versions of Unix.  For example, today I started porting some code
from our ancient Parallel, which runs an old release of SunOS, to our
nifty new ICON system.  Hardware-wise the ICON is very nice, but its
implementation of UNIX has a few weird quirks, due to its hybridization
of BSD and Sys V.

At the heart of my code is a set of database routines that use flock() and
lockf() calls to keep things in order.  On SunOS, lockf() has an enforcement
mode, in which the kernel blocks any routine that attempts to read or write
a locked section until the section is unlocked.  ICON chose to implement the
Sys V version instead, which has no enforcement mode.  Since I depended on
this enforcement in a few places, I had to rewrite the routines to check for
locks before reading or writing.  Fortunately, I had encapsulated all
read/write access to the database in a few routines that were easy to
modify.  Had I not, and had I had several thousand read/write statements
against this database depending on enforcement, it would have been a serious
problem, requiring quite a bit of man-power to track down and fix.

This is not a _major_ problem and the port is going rather smoothly, but
this is just between two 680x0 based machines, both running BSD derivatives.
In fact, I could have taken my SunOS binaries over to the ICON and waved
a magic program over them and had them run on the ICON.  Had I done this,
however, the enforcement mode would not have worked and I would have ended
up with one royally screwed-up database with no clue as to how it got that
way.

The pseudo-code idea is nice, but I'd settle for full source code
compatibility.  However, the only way to get that is to buy an identical
machine with identical software, and one of the reasons we all like Unix so
much is that it runs on so many different pieces of hardware supported by so
many different vendors.  If each vendor implemented "standard" Unix and only
"standard" Unix, what would be the point of having different vendors?  Yet
with different vendor extensions, the only way to write portable code is to
write to the minimal standard, which defeats the purpose of having extensions.

What's my point?  We cannot have "standard" Unix running on "standard"
hardware unless we are willing to accept stagnation.  Look at the
microcomputer marketplace: it has standardized around the IBM PC/MS-DOS
combination and has been _stuck_ there for four years without significant
change.  "Standard" Unix and hardware standardized around SPARC will force
the same stagnation upon the Unix and minicomputer market.  For us to have
new and niftier products to play with, we must accept the burdens that come
with new and different things, namely some hard work in getting them
working.  Unix, as of 1988, is not the ultimate in operating systems, and
what AT&T and Sun will produce won't be either.  I hope that what I'm
working on in the year 2000 will be significantly different from what I'm
working on today, though I doubt it.  We seem to be converging on a plateau
for computers that will not change substantially for some time.

(Sorry for the soapbox, but I needed to say it)
-- 
David L. Smith
{sdcsvax!jack,ihnp4!jack, hp-sdd!crash, pyramid, uport}!sdeggo!dave
sdeggo!dave@amos.ling.edu 
Sinners can repent, but stupid is forever.

gwyn@brl-smoke.ARPA (Doug Gwyn ) (04/03/88)

In article <186@sdeggo.UUCP> dave@sdeggo.UUCP (David L. Smith) writes:
>At the heart of my code is a set of database routines that use flock() and
>lockf() calls to keep things in order.  On SunOS, lockf() has an enforcement
>mode, in which the kernel blocks any routine that attempts to read or write
>a locked section until the section is unlocked.  ICON chose to implement the
>Sys V version instead, which has no enforcement mode.

I don't know the specifics of the ICON system, but I do know about
System V file/record locking.  The current release of UNIX System V
provides improved locking control via a subset of the fcntl() system
call.  This appeared just after SVR2.0; it includes "read" (shared)
and "write" (exclusive) advisory (cooperative) locks on any block of
bytes or the whole file.  More recently, mandatory locking has been
added; it is enabled on a per-file basis by setting the Set-GID bit
on a file that does not have the Group-Execute bit set.  This causes
the same fcntl() operations to be treated as mandatory locks instead of
cooperative ones.
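A rough sketch of how the pieces fit together, under the assumption that
the system supports mandatory locking at all; the function name and record
size are illustrative only:

```c
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch: enable System V mandatory locking on a file by setting
 * the Set-GID bit with Group-Execute clear (mode 02644), then take
 * a write lock on the first record with fcntl().  On systems
 * without mandatory locking the lock is still taken, but only as
 * an advisory (cooperative) lock. */
int lock_first_record(const char *path, off_t reclen)
{
	int fd;
	struct flock fl;

	if (chmod(path, S_ISGID | 0644) == -1)	/* 02644 */
		return -1;
	if ((fd = open(path, O_RDWR)) == -1)
		return -1;
	fl.l_type = F_WRLCK;
	fl.l_whence = SEEK_SET;
	fl.l_start = (off_t)0;
	fl.l_len = reclen;
	if (fcntl(fd, F_SETLKW, &fl) == -1) {	/* wait for the lock */
		close(fd);
		return -1;
	}
	return fd;	/* released with F_UNLCK or close() */
}
```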

Check your system documentation to see if any of this has been
provided.  It's well enough designed that IEEE Std 1003.1 decided
to pick up most of it for the POSIX standard.

allbery@ncoast.UUCP (Brandon Allbery) (04/05/88)

As quoted from <1697@van-bc.UUCP> by sl@van-bc.UUCP (pri=-10 Stuart Lynne):
+---------------
| PS. I'm not a Cobol person, but I have dim memories of a product blurb for
| one of the more popular Cobol compilers doing something along these lines as
| well. Cobol to object code which could be interpreted on any of the
| supported systems.
+---------------

Ryan-McFarland's RM/COBOL just about does this: you can *cross-compile* to
any machine from any machine, but they couldn't settle on one single format
because the original format depended on byte ordering.  They could deal with
just about everything at this point, at the expense of losing upward
compatibility with old programs; they may have done it with COBOL-85, I
don't know.

(For those interested, nitty-gritty details follow.)

The RM/COBOL compiler supports 4 flags controlling cross-compilation:

* byte ordering
* object record size 254 vs. 256 bytes (this one has TRSDOS written all over
	it! -- especially since the manual says "Z80 systems only")
* ASCII vs. EBCDIC
* inhibit new features of compiler (the compiler that permits cross-compiling
	is upward compatible with the old one, setting this flag turns off
	the new commands)

The only relevant flag that *can't* be dealt with at run time is ASCII vs.
EBCDIC -- and even that one could be, if the first block of the runtime
were a character translation table.  The 254-vs.-256 distinction could be
dropped; who uses a TRS-80 Model I/III these days?  It doesn't apply to the
later RM/COBOL versions (2.x) anyway, so the Model IV wouldn't be affected.
Byte ordering, of course, is arbitrary in an interpreted system.  The
disable-new-features flag simply marks any V2 code as an error, since the
V1 executive doesn't know the interpreter opcodes for the commands added in
V2; it isn't relevant here.
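The translation-table fix alluded to above would be tiny.  A sketch, with
names of my own choosing: the loader reads a 256-entry table from the front
of the object file and filters every character datum through it.

```c
#include <stddef.h>

/* Sketch of the runtime-prefix idea: a 256-entry table maps each
 * byte from the object file's character set to the host's, so one
 * object format could serve both ASCII and EBCDIC machines. */
void xlate_block(unsigned char *buf, size_t len,
                 const unsigned char table[256])
{
	size_t i;

	for (i = 0; i < len; i++)
		buf[i] = table[buf[i]];
}
```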

If interpreters were faster (or threaded interpreters were easier for humans
to program), hardware independence would be trivial.
-- 
	      Brandon S. Allbery, moderator of comp.sources.misc
       {well!hoptoad,uunet!hnsurg3,cbosgd,sun!mandrill}!ncoast!allbery