[comp.arch] Shared libraries

lamaster@pioneer.arpa (Hugh LaMaster) (01/01/70)

In article <4578@oberon.USC.EDU> blarson@skat.usc.edu (Bob Larson) writes:

>
>These new shared libraries are a special case of EPFs.  EPFs are
>programs that are dynamically linked into the user's address space (at a
>segment boundary) and are shared between users.  EPFs have been around
>since Primos 19.0, but were not documented and fully supported till
>Primos 19.4.

(repetitive disclaimer: virtual memory segments are NOT the same as 8086 or
PDP-11 segments).

I would add that the two-level page table hardware and two-level virtual
memory structure, permitting shared virtual memory segments, work so well
that it is a disappointment to me that not everyone does it.  Another possible
implementation of shared libraries using shared segments: map library code
directly onto a shared virtual memory segment.  Shared pages of the same
dataset need only one copy; if a different (older) version of the dataset is
specified as a library, its pages are used.  User programs can share pages
without intervention of a system manager; nothing needs to be "installed".
Compare to what VMS gives you with global sections...
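
In mmap()-style terms (on a system that provides such an interface) the idea
looks roughly like the sketch below; the file name is purely illustrative, and
a real implementation would live in the loader rather than the application:

	/* Sketch: map one copy of a library's text so that every process
	 * mapping the same file shares the same physical pages. */
	#include <sys/mman.h>
	#include <sys/stat.h>
	#include <fcntl.h>
	#include <stdio.h>

	int main(void)
	{
		struct stat st;
		void *base;
		int fd = open("/usr/lib/libwhatever.so", O_RDONLY);	/* illustrative */

		if (fd < 0 || fstat(fd, &st) < 0)
			return 1;

		/* Read-only, executable, shared: one copy of the pages in
		 * memory no matter how many processes map the file. */
		base = mmap(0, (size_t)st.st_size, PROT_READ | PROT_EXEC,
			    MAP_SHARED, fd, 0);
		if (base == MAP_FAILED)
			return 1;

		printf("library text mapped at %p\n", base);
		return 0;
	}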




  Hugh LaMaster, m/s 233-9,  UUCP {topaz,lll-crg,ucbvax}!
  NASA Ames Research Center                ames!pioneer!lamaster
  Moffett Field, CA 94035    ARPA lamaster@ames-pioneer.arpa
  Phone:  (415)694-6117      ARPA lamaster@pioneer.arc.nasa.gov

(Disclaimer: "All opinions solely the author's responsibility")

roy@phri.UUCP (Roy Smith) (09/22/87)

In article <2067@sfsup.UUCP> shap@sfsup.UUCP (J.S.Shapiro) gives a pretty
good introductory rundown on shared libraries.

	One thing that has always bothered me about shared libraries (or
dynamic loading, if you prefer) is the problem of backward compatibility.
It's not too uncommon that a new version of a program comes along, but some
people still want to use the old version, for whatever reason.  Let's say a
new version of scanf comes along which follows some standard a bit closer
than the old one.  This is arguably a good thing, but what if I just happen
to have a program which depends on the old (albeit wrong) behavior of the
scanf library function?

	On a compile-time library loading system, all I would have to do
would be to keep around a copy of the executable loaded with the old
library and I'd be happy.  We have more than one 3.0 binary running on our
SunOS-3.2 systems for exactly this reason.  What do I do on a shared
library system when scanf changes out from under me and breaks my program?
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

jesup@mizar.steinmetz (Randell Jesup) (09/23/87)

In article <2903@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>	One thing that has always bothered me about shared libraries (or
>dynamic loading, if you prefer) is the problem of backward compatibility.
...
>SunOS-3.2 systems for exactly this reason.  What do I do on a shared
>library system when scanf changes out from under me and breaks my program?
>Roy Smith, {allegra,cmcl2,philabs}!phri!roy

	Take a page from the Amiga Exec (which uses shared libraries for
almost everything).  When opening the shared library, tell it what version
of the library you want (and maybe whether you'll accept a newer
version).  This also solves the problem of running a binary that wants
version 33 of a library on a system that is only up to 32, for example.
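
From C the open looks roughly like the sketch below; the declarations and
the library name are only illustrative (on the real system the version
argument is the minimum the caller will accept):

	/* Sketch of opening a shared library at a minimum version.  On a
	 * real Amiga these declarations come from the system headers; they
	 * are repeated here only to keep the sketch self-contained. */
	struct Library *OpenLibrary(const char *name, unsigned long version);
	void CloseLibrary(struct Library *lib);

	struct Library *IntuitionBase;

	int main(void)
	{
		/* Ask for at least version 33; a newer installed version is
		 * fine, an older one makes the open fail. */
		IntuitionBase = OpenLibrary("intuition.library", 33);
		if (IntuitionBase == 0)
			return 20;	/* give up: library missing or too old */

		/* ... call the library through IntuitionBase ... */

		CloseLibrary(IntuitionBase);
		return 0;
	}
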
	Randell Jesup  (Please use one of these paths for mail)
	sungod!jesup@steinmetz.UUCP (uunet!steinmetz!{sungod|crd}!jesup)
	jesup@ge-crd.ARPA

gwyn@brl-smoke.ARPA (Doug Gwyn ) (09/23/87)

In article <2903@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>What do I do on a shared
>library system when scanf changes out from under me and breaks my program?

Obviously, you fix your program.  You need to do that anyway
the next time you recompile it (e.g. during routine maintenance).

This use of shared libraries depends on there being a stable
interface definition for the shared routines.  If your code
depends on anything not guaranteed by the interface definition,
then you were asking for trouble all along.

jss@hector.UUCP (Jerry Schwarz) (09/23/87)

In article <2903@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>
>	One thing that has always bothered me about shared libraries (or
>dynamic loading, if you prefer) is the problem of backward compatibility.
>It's not too uncommon that a new version of a program comes along, but some
>people still want to use the old version, for whatever reason.  

You have to relink with the unshared version of the old library. My
understanding is that the tools that build a shared library on System
V build an unshared version at the same time. In most circumstances I
would expect the unshared version to be distributed with the shared
version.

Jerry Schwarz

guy%gorodish@Sun.COM (Guy Harris) (09/24/87)

> >It's not too uncommon that a new version of a program comes along, but some
> >people still want to use the old version, for whatever reason.  
> 
> You have to relink with the unshared version of the old library.

You do in some, but not all, implementations.  An earlier article indicated
that the Amiga implementation permits you to specify a version number when the
library is opened (does this mean programs have to explicitly open shared
libraries, i.e., they have to know that they're using a shared library?).

See "Shared Libraries in SunOS" from the last USENIX proceedings for another
scheme for handling this problem.  The scheme described therein permits
multiple versions of the library to be resident; when the interface changes in
a way that is expected to break old programs, the major version number is
increased, so that programs depending on an earlier major version of the
library won't bind to a later version.

Of course, this requires you to keep old versions of the library around, but
you can't have everything; if this causes a disk-space problem, you might have
to link with an unshared version of the library.
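
As a rough sketch of the matching rule (the structure and field names here
are invented, not the actual SunOS data structures):

	/* Sketch of the major/minor matching rule. */
	struct libvers {
		int major;	/* bumped on incompatible interface changes */
		int minor;	/* bumped on compatible changes and bug fixes */
	};

	/* A resident library is acceptable only if its major number is the
	 * one the program was linked against and its minor number is at
	 * least as new as the one the program was linked against. */
	int acceptable(struct libvers need, struct libvers have)
	{
		return have.major == need.major && have.minor >= need.minor;
	}
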
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

roy@phri.UUCP (09/24/87)

In <2903@phri.UUCP> I asked:
> What do I do on a shared library system when scanf changes out
> from under me and breaks my program?

In <6461@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) replied:
> Obviously, you fix your program. [...]  If your code depends on anything
> not guaranteed by the interface definition, then you were asking for
> trouble all along.

	I respect Doug Gwyn.  He seems to know what he's talking about more
than most people, so I don't flame him lightly, but that's the kind of
pompous, if-you-can't-do-it-right-don't-do-it-at-all answer that gives
computer science a bad name.  In the same category as "if it doesn't do X,
then it's not C".  Yes, it's nice to follow the rules and write code
according to the interface definition.  But where does that leave the poor
schmuck who didn't follow the rules two years ago when he was writing his
very grungey but very useful and productive program?  In the dirt where he
belongs?  Sorry, it doesn't work that way in the real world.  Maybe we
don't have the source any more, or never did.  Maybe it's just more trouble
than it's worth to fix the code.

	I don't run a computer science research center, I run a production
machine.  My users expect to get their work done.  If it runs and gives
them the right answers, that's more important than following the rules.
Many of the people who write production programs here are biologists, not
professional programmers.  If one of them has a program which has been
working for N years and all of a sudden it breaks because I put up a new
version of Unix, he doesn't want to know that the new version is better and
the reason his program won't run any more is because he didn't follow the
interface definition.  As far as he's concerned, I broke his program by
upgrading the OS and he's pissed at me because he can't get his work done.
Guess what?  I agree with him.

	Let me give you an example.  One of our big users has a 7000 line
Fortran program.  The code is 15 years old but it still works, and for what
it does, there still isn't anything better.  The classic dusty deck.  He
made a few minor modifications to it involving calling system(3F).
Everything worked fine for a year or so running under SunOS-3.0, but then
we upgraded to 3.2.  Guess what, in 3.2 (and, I understand, in 3.4 as
well), the Fortran system() routine is broken.  Now it's not even a case of
not following the rules, but simply that the new library routine has a bug
which the old one didn't.  Until we tracked the bug down to the system()
call and Sun was able to supply the workaround, our solution was (you
guessed it) to continue program development on a 3.0 machine and run the
binaries on the 3.2 machines (which had better floating point hardware).

	It just so happens that Sun was able to supply a workaround which
allowed us to link using the new (broken) library, but imagine what would
have happened if Sun was using a dynamic library loading scheme and there
was no workaround other than linking with the old library.  I could have
shaved my head and marched up and down the halls chanting "follow the
interface definition" till I was blue in the face and it wouldn't have done
a damn bit of good.  I suppose I could have uninstalled the 3.2 upgrade
and refused to pay for it, but that's not what we wanted to do.

	My background is engineering.  In engineering school they teach you
to follow the rules and design things according to sound theoretical
principles.  They also teach you that when push comes to shove, it's
sometimes better to break the rules and put a bag on the side rather than not
have it work at all.  Sure, design it so it works like it's supposed to,
but just in case, leave the hooks in so you have someplace to hang that bag
from if you ever have to.
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

des@jplpro.JPL.NASA.GOV (David Smyth) (09/24/87)

In article <28957@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>> >It's not too uncommon that a new version of a program comes along, but some
>> >people still want to use the old version, for whatever reason.  
>> 
>> You have to relink with the unshared version of the old library.
>
>You do in some, but not all, implementations.  An earlier article indicated
>See "Shared Libraries in SunOS" from the last USENIX proceedings for another
>Of course, this requires you to keep old versions of the library around, but
>you can't have everything; if this causes a disk-space problem, you might have
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>to link with an unshared version of the library.
>	Guy Harris

How is COPYING the old shared libraries into executables which need 
them ANY savings in disk usage?  It seems it will be a DEAD LOSS:
core (bigger executable images); virtual memory (it gets used up even
if paged out);  AND disk space (the executable file gets bigger for EVERY
program which needs the unshared library).

Why EVER have unsharable libraries???  

Why EVER have libraries specifically linked to an executable???
	a) If it is an application which makes repeated calls
	   to a library, the FIRST invocation may be slower, but
	   all following invocations can be VERY CLOSE to the same
	   speed [Message/Object Programming, Brad J. Cox, see 
	   table 1].
	b) Speed Critical Applications probably want to be vectorized,
	   and I would think reducing the competition for core via
	   shared libraries would be a BIG win if swapping is reduced
	   even a little bit (I don't know much about vectorized
	   algorithms, I only work on these archaic Suns, Vaxen, and
	   such von Neumann rubbish :^) ).

daveh@cbmvax.UUCP (Dave Haynie) (09/24/87)

in article <28957@sun.uucp>, guy%gorodish@Sun.COM (Guy Harris) says:
> 
>> >It's not too uncommon that a new version of a program comes along, but some
>> >people still want to use the old version, for whatever reason.  
>> 
>> You have to relink with the unshared version of the old library.
> 
> You do in some, but not all, implementations.  An earlier article indicated
> that the Amiga implementation permits you to specify a version number when the
> library is opened (does this mean programs have to explicitly open shared
> libraries, i.e., they have to know that they're using a shared library?).

Well, somewhere in each executable you'll find an explicit request to open
and subsequently close any shared libraries.  This is very important for
Amigas with small amounts of memory, since it allows the OS to load on demand
any disk-based libraries, and to dump any that are no longer used.  Currently
all Amiga system functions are available only as shared libraries, and 
the programmer normally includes a single OS call to open each, followed by
one to close each.  Of course, if unshared libraries were available, a
linker could link in NOP open/close requests and maybe as much as 300K-400K
of library code into your executable.  Now, when you're compiling in C
using UNIX compatibility calls or some such, there's UNIX style start-up
code that opens everything you'd need, so in that case the programmer has
probably no idea what's shared and what isn't.

> See "Shared Libraries in SunOS" from the last USENIX proceedings for another
> scheme for handling this problem.  The scheme described therein permits
> multiple versions of the library to be resident; when the interface changes in
> a way that is expected to break old programs, the major version number is
> increased, so that programs depending on an earlier major version of the
> library won't bind to a later version.

As far as I know no one's done this kind of thing on the Amiga yet, but I think
a small modification to the OpenLibrary() function could allow for multiple
generations to be kept around, if necessary.  I think what's done now is that
if a function is changed enough to cause most things to break, it would appear
as a new function, while the old one is kept in place for backward 
compatibility.

> Of course, this requires you to keep old versions of the library around, but
> you can't have everything; if this causes a disk-space problem, you might have
> to link with an unshared version of the library.

Or old functions around in new libraries.

> 	Guy Harris
> 	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
> 	guy@sun.com

-- 
Dave Haynie     Commodore-Amiga    Usenet: {ihnp4|caip|rutgers}!cbmvax!daveh
"The A2000 Guy"                    PLINK : D-DAVE H             BIX   : hazy
     "God, I wish I was sailing again"	-Jimmy Buffett

henry@utzoo.UUCP (Henry Spencer) (09/24/87)

> ... What do I do on a shared
> library system when scanf changes out from under me and breaks my program?

Whisper the magic phrase:  "search paths".  There is no *inherent* reason
why you can't keep multiple versions around and have some way to say "run
this program with the old library, please".  One can even envision doing
it automatically with some sort of date-stamp system:  the loader finds
the newest library version that predates the program, or failing that, the
oldest version remaining.
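
A loader could implement that date-stamp rule with something like the sketch
below; everything in it is hypothetical, and the stamps would really come
from the file system:

	#include <stdio.h>

	/* Sketch: given the program's time stamp and the time stamps of the
	 * library versions found on the search path, pick the newest version
	 * that predates the program, or failing that the oldest one left. */
	static long pick_library(long progtime, long libtime[], int nlibs)
	{
		long best = -1, oldest = -1;
		int i;

		for (i = 0; i < nlibs; i++) {
			if (oldest == -1 || libtime[i] < oldest)
				oldest = libtime[i];
			if (libtime[i] <= progtime &&
			    (best == -1 || libtime[i] > best))
				best = libtime[i];
		}
		return best != -1 ? best : oldest;	/* -1 only if nlibs is 0 */
	}

	int main(void)
	{
		long libs[3] = { 100, 200, 300 };

		printf("%ld\n", pick_library(250, libs, 3));	/* prints 200 */
		return 0;
	}
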
-- 
"There's a lot more to do in space   |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova | {allegra,ihnp4,decvax,utai}!utzoo!henry

guy%gorodish@Sun.COM (Guy Harris) (09/25/87)

> How is COPYING the old shared libraries into executables which need 
> them ANY savings in disk usage?

If the sum of the disk size differences between the executables in question as
linked with the shared library and the executables in question as linked with
the unshared library is less than the size of the old shared library, it's
obviously going to save you disk space not to keep the old library around.  I
will not rule out this possibility out of hand.

> It seems it will be a DEAD LOSS: core (bigger executable images);

What do you mean by "core" here?  Do you mean "physical memory"?  Yes, shared
libraries will share physical memory better, but that has nothing to do with
the size of the executable image *per se*.  The smaller size of the image is,
in part, misleading; in order for the image to *run* it still needs to have the
shared libraries added in.

> virtual memory (it gets used up even if paged out);

Umm, images built without shared libraries will usually require *less* virtual
memory than ones built with shared libraries; the one built without shared
libraries doesn't include a whole pile of routines that it doesn't use.  That
stuff may be paged out, but you said "virtual memory", not "physical memory"
here.

> Why EVER have unsharable libraries???  

So that you can have a program with which to back up to an old version of a
shared library if a new minor version has a bug in it (or if the file
containing the shared library got trashed, or if it got unlinked, or....)?
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

tim@ism780c.UUCP (09/25/87)

In article <2903@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
< SunOS-3.2 systems for exactly this reason.  What do I do on a shared
< library system when scanf changes out from under me and breaks my program?

With System V shared libraries, the path names of the shared libraries are
in the header of the a.out file.  It shouldn't be too hard to provide
a tool that allows you to change these.

Then, when you find out that a library update breaks your program, you
could obtain a copy of the old library, and fix your a.out to use
this.
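
Such a tool could be as crude as the sketch below, which knows nothing about
the real a.out header layout and simply overwrites one path string with
another of the same length or shorter, NUL-padded:

	/* Crude sketch: find the old library path string (and its trailing
	 * NUL) in the executable and overwrite it in place with the new
	 * one.  The new path must not be longer than the old one. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(int argc, char **argv)
	{
		FILE *f;
		char *buf;
		long size, i;
		size_t oldlen, newlen;

		if (argc != 4) {
			fprintf(stderr, "usage: %s a.out oldpath newpath\n", argv[0]);
			return 1;
		}
		oldlen = strlen(argv[2]);
		newlen = strlen(argv[3]);
		if (newlen > oldlen) {
			fprintf(stderr, "new path must not be longer than old\n");
			return 1;
		}
		if ((f = fopen(argv[1], "r+b")) == NULL) {
			perror(argv[1]);
			return 1;
		}
		fseek(f, 0L, SEEK_END);
		size = ftell(f);
		rewind(f);
		if ((buf = malloc(size)) == NULL)
			return 1;
		fread(buf, 1, size, f);

		for (i = 0; i + (long)oldlen < size; i++) {
			if (memcmp(buf + i, argv[2], oldlen + 1) == 0) {
				fseek(f, i, SEEK_SET);
				fwrite(argv[3], 1, newlen, f);
				while (newlen++ < oldlen)	/* pad with NULs */
					putc('\0', f);
				fclose(f);
				return 0;
			}
		}
		fprintf(stderr, "old path not found in %s\n", argv[1]);
		return 1;
	}
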
-- 
Tim Smith, Knowledgian		{sdcrdcf,uunet}!ism780c!tim
				tim@ism780c.isc.com
"Oh I wish I were Matthew Wiener, That is who I truly want to be,
 'Cause if I were Matthew Wiener, Tim Maroney would send flames to me"

gew@dnlunx.UUCP (Weijers G.A.H.) (09/25/87)

In article <28957@sun.uucp>, guy%gorodish@Sun.COM (Guy Harris) writes:
> Of course, this requires you to keep old versions of the library around, but
> you can't have everything; if this causes a disk-space problem, you might have
> to link with an unshared version of the library.

How can this solve the disk space problem? You'll have *at least* one
copy of the old version around, linked to the unconverted application,
and possibly many copies.
-- 
| Ge' Weijers                            |  Disclaimer: the views expressed |
| PTT Dr. Neher Laboratories             |  are usually entirely my own,    |
| Leidschendam, the Netherlands          |  not my employers'. I don't own  |
| uucp:  ...!mcvax!dnlunx!gew            |  a cat to share them with.       |

gwyn@brl-smoke.ARPA (Doug Gwyn ) (09/25/87)

In article <2906@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>My background is engineering.  In engineering school they teach you
>to follow the rules and design things according to sound theoretical
>principles.  They also teach you that when push comes to shove, it's
>sometimes better to break the rules and put a bag on the side rather than not
>have it work at all.  Sure, design it so it works like it's supposed to,
>but just in case, leave the hooks in so you have someplace to hang that bag
>from if you ever have to.

I hope they also taught you how to migrate forward with progressing
technology.  I, too, come from a "production shop" background, but
we knew enough to keep our sources around and (in a non-shared library
environment) would recompile everything at regular intervals, simply
to ensure that our applications matched the current state of the
system software.  Naturally, we had test procedures.

I don't see that the situation with shared libraries is appreciably
different from that without them, other than that your application
will break sooner (when the new library is installed) rather than
later (when you have to rebuild the application for some reason, such
as making a small change).  The causes for breakage are the same
either way.

I stress the library interface specifications because that is the
"treaty point" on which the application programmer and the system
implementor must agree.

As to the people with dusty decks not being in a position to maintain
their own code, I can't sympathize very much.  Most code like that
that I have seen has given the appearance of "working" but in fact
was producing various degrees of garbage.  I don't know what the
solution to this is, other than to have more qualified professional
programmers available to help others exploit computers more robustly
and effectively.  It's a problem here (BRL), too.

crc@abvax.icd.ab.com (Clive R. Charlwood) (09/25/87)

in article <2903@phri.UUCP>, roy@phri.UUCP (Roy Smith) says:
> 	One thing that has always bothered me about shared libraries (or
> dynamic loading, if you prefer) is the problem of backward compatibility.


I would hope that any shared library system would give some ability
for each executable to link into specific revisions of each library
routine.

The author of the code will probably have to go and update the code
to make it work with the latest and greatest sometime.

BUT you can't get into the situation where whenever a library changes
all installed software has to be retested immediately.


-- 
		Clive R. Charlwood @ Allen-Bradley Company
		Industrial Computer Division
		747 Alpha Drive, Highland Heights, OH   44143
		...!{decvax,masscomp,pyramid,cwruecmp}!abic!crc

earl@mips.UUCP (Earl Killian) (09/26/87)

In article <2906@phri.UUCP>, roy@phri.UUCP (Roy Smith) writes:
[lots of words about programming in the real world]

I agree with what Roy Smith wrote with respect to the real world.
But, shared libraries aren't bad for his world.  I think they make
life simpler.  At least if done right (I don't think System V shared
libraries qualify, but I don't know).  On Multics, it works the way
Henry Spencer expects; to fix a broken function, you simply put a
replacement in the search path, and the problem is fixed.  You don't
need to relink everything that's been worked in since the bug first
appeared (finding all the programs linked with the bad system() in
your example might be a real chore, depending on when the bug is
discovered).

Bug fixes, workarounds, and new features are major uses of Multics
dynamic linking.  Consider something trivial on Multics and impossible
on vanilla Unix.  I login to a system over the arpanet from far away.
It prints out time in the wrong time zone.  Suppose there's no option
to specify a timezone.  I just put a new binary to ascii time
conversion function in my search path, and I'm happy.  On Unix I'd
have to relink all the appropriate programs in /usr (not easy if I
don't have the source).

peter@sugar.UUCP (Peter da Silva) (09/26/87)

In article <28957@sun.uucp>, guy%gorodish@Sun.COM (Guy Harris) writes:
> that the Amiga implementation permits you to specify a version number when the
> library is opened (does this mean programs have to explicitly open shared
> libraries, i.e., they have to know that they're using a shared library?).

Yes, but this can be included in the startup code (and in fact is, for the most
common shared library "dos.library"). Shared library calls are made by loading
the base address of the library jump table into A6 and doing an indirect call
off it.
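
In C terms the mechanism is roughly a table of entry points reached through
the library base pointer; this is only a loose model (the real table sits at
negative offsets from A6 and arguments go in registers), and the names below
are made up:

	/* Loose C model of a call through a shared library's jump table.
	 * The base pointer plays the role of A6; each routine occupies a
	 * fixed slot, so callers never need the routines' real addresses
	 * and the library can be replaced without relinking them. */
	struct librarybase {
		long (*open_window)(long arg);		/* slot 1 */
		long (*close_window)(long arg);		/* slot 2 */
		/* ... one slot per library entry point ... */
	};

	struct librarybase *LibBase;	/* set up when the library is opened */

	long call_open_window(long arg)
	{
		/* the "indirect call off the base register" */
		return (*LibBase->open_window)(arg);
	}
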
-- 
-- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
--                 'U`  Have you hugged your wolf today?
-- Disclaimer: These aren't mere opinions... these are *values*.

shap@sfsup.UUCP (J.S.Shapiro) (09/26/87)

In article <443@devvax.JPL.NASA.GOV>, des@jplpro.JPL.NASA.GOV (David Smyth) writes:
> In article <28957@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
> 
> How is COPYING the old shared libraries into executables which need 
> them ANY savings in disk usage?  It seems it will be a DEAD LOSS:
> core (bigger executable images); virtual memory (it gets used up even
> if paged out);  AND disk space (the executable file gets bigger for EVERY
> program which needs the unshared library).
> 

I think you missed the idea. A shared library is usually not a single
monolithic object, and the incompatibility with an old version is usually
temporary. Since the libraries only provide for *unresolved* references, it
suffices as a temporary fix to haul only the problematic object out of the
old library for inclusion in your code, and continue to use the new shared
library. That is, you don't have to use *all* of the old library.

Two things mitigate this. First, changes in libraries are almost always
bug fixes or compatible with the documentation. If you depend on a bug,
you really *do* deserve what you get, particularly given that most
companies that produce compilation systems provide workaround lists.
If you have not been following the docs, you also deserve what you get.

The other possible case is a major upheaval in the compilation system, as
will tend to happen with the forthcoming batch of ANSI C compilers in the
market. ANSI has changed C a lot. In these cases you need to do substantial
rework anyway, and linking with the old objects is a way to get a working
interim product to your customers while you provide a real solution. Yes,
in the short term it is a lose from the standpoint of space if you have to
do this for a lot of routines, however, disk is cheaper than
nonproductivity, and on a temporary basis most customers won't object.

> Why EVER have unsharable libraries???  

There are many architectures out there which don't support shared libraries
(particularly position independent ones) gracefully. Having a shared
library means reserving a good-sized chunk of your address space for each
shared library you anticipate, and it becomes a fairly difficult
administrative problem to parcel out chunks of the address space to your
VARs.

On many architectures, position independent code means a performance hit of
20% (or more), and only recently have advances in hardware technology made
this acceptable. It's a tradeoff. Many architectures can't do shared
libraries at all, and any compilation system that wants to deal with these
architectures *as well as* the newer architectures faces a difficult
problem.

> Why EVER have libraries specifically linked to an executable???

See above, then I'll deal with the specific claims below:

> 	a) If it is an application which makes repeated calls
> 	   to a library, the FIRST invocation may be slower, but
> 	   all following invocations can be VERY CLOSE to the same
> 	   speed [Message/Object Programming, Brad J. Cox, see 
> 	   table 1].

Well, this isn't really a win. There are basically two techniques for
making this hack work. These are: (1) completely relocate the executable
when you load it into core to execute it, or (2) come up with a backpatching
scheme such that the first time you call a function from any given place,
some intermediate glue examines the CALL statement and backpatches the
*real* function pointer into place.

Option (1) is of debatable value - that can be a lot of relocation, and if
your binary is big the relocation takes a long time. Whether or not this is
a good choice depends on how many times you need to fire up the binary,
how big it is, and how much disk space it saves you to use the shared
libraries. It makes doing paging efficiently hard (see below).

Option (2) is very difficult to do on many architectures, requires careful
code generation, and prevents taking advantage of span-dependent
instructions for calls. This has its own impact, and it is potentially
sizeable.
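
A self-contained sketch of option (2) in portable C follows; real
implementations patch the call site or a jump slot rather than a C variable,
and all of the names here are invented:

	#include <stdio.h>

	/* Stands in for the routine living in the shared library. */
	static double lib_routine(double x)
	{
		return x + 1.0;		/* placeholder body */
	}

	static double resolver(double x);

	/* The slot emitted for an unresolved call; it starts out pointing
	 * at the resolver rather than at the real routine. */
	static double (*slot)(double) = resolver;

	static double resolver(double x)
	{
		slot = lib_routine;	/* backpatch: later calls go direct */
		return lib_routine(x);
	}

	int main(void)
	{
		printf("%g\n", slot(2.0));	/* first call resolves and patches */
		printf("%g\n", slot(2.0));	/* later calls skip the resolver */
		return 0;
	}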

> 	b) Speed Critical Applications probably want to be vectorized,
> 	   and I would think reducing the competition for core via
> 	   shared libraries would be a BIG win if swapping is reduced
> 	   even a little bit (I don't know much about vectorized
> 	   algorithms, I only work on these archaic Suns, Vaxen, and
> 	   such von Neumann rubbish :^) ).

Consider that both methods require modifying text pages, and this means
that you have to reserve space for these pages in your paging area.
This prevents you from paging in from the text portion of the original program
file. Shared libraries tend to be small sets of core facilities.  Chances
are that many more pages will reference them than the library itself
contains, and this hurts you in swap area.  Note that to make this work you
need *writable* shared text, which opens a whole other can of worms.

There is a technique which can be used to avoid all this which is to have
an indirection table and a directory in each library, or a
well-known-globals list, as someone suggested, but this implies a
remarkable performance hit.

In short, it ain't all as easy as it sounds, which is why most compilation
systems still don't support it at all.

And that is why you want non-shared libraries.

Jon Shapiro
AT&T Information Systems

ka@uw-june.UUCP (09/27/87)

> What do I do on a shared library system when scanf changes out from
> under me and breaks my program?

You recompile your program to use the old version of the shared library
instead of the current one.  Sure, it would be nice not to have to
recompile these programs, but in the case of nonshared libraries you
have to recompile also.  With nonshared libraries, you would have to
recompile all the programs which were broken due to the bug in scanf.
With shared libraries, on the other hand, you have to recompile the
programs that depend upon the bug in scanf.  However, these programs
should really be fixed, which will force them to be recompiled anyway.
				Kenneth Almquist
				uw-june!ka	(New address!)

stachour@umn-cs.UUCP (09/29/87)

In article <8650@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) writes:
> > ... What do I do on a shared
> > library system when scanf changes out from under me and breaks my program?
> 
> Whisper the magic phrase:  "search paths".  There is no *inherent* reason
> why you can't keep multiple versions around and have some way to say "run
> this program with the old library, please".

That's why Multics has had search-paths since day-1.  That's also why it
has the phrase "-referencing_dir" in its rules, so that programs pointed
at an old version of a dynamically-linked shared library thereby
getting the first of a set of routines from that particular version
of the library will get the rest of the set that match.

(Multics also has done up-referencing and down-referencing of data,
argument-interrogation, compatible-incompatible replacements,
system_library_obsolete, and other ways to handle multiple versions
of a program, but that's another 20-year-old story that few read,
because it works so well no-one needs to read or change the code.)



Paul Stachour                 Why don't computer people seem to know
Stachour@HI-Multics.arpa      Computer History? It isn't like there
stachour at umn-cs.equ        is 1000 or even 50 years of it!

hammond@faline.bellcore.com (Rich A. Hammond) (09/29/87)

In article <> shap@sfsup.UUCP (J.S.Shapiro) writes:
 [much discussion of shared libraries being dynamically modified] ...
>
>There is a technique which can be used to avoid all this which is to have
>an indirection table and a directory in each library, or a
>well-known-globals list, as someone suggested, but this implies a
>remarkable performance hit.

I assume by this he means avoiding modifying the shared library or the
binary image of the process.

>In short, it ain't all as easy as it sounds, which is why most compilation
>systems still don't support it at all.
>
>And that is why you want non-shared libraries.

I don't know how you come to the conclusion that it is "a remarkable
performance hit" to support statically linked shared libraries.  On the VAX,
M680x0, and National 320X0 families, each subroutine call requires an extra
jump instruction from the indirection table to the actual code.  You can
link things so that the globals are all addressed exactly as they would be
in a normal (non-shared) library executable, so the only "hit" is the cost
of the extra jump.  From my memory of when I worked on this, the most
frequently called items in the shared library are the assembly language
glue to system calls and printf/sprintf/...  All of these take enough
time that the overhead of a single extra jump is not going to be a big loss.

Compare this to the savings from sharing the library, i.e. less space used
in memory, less I/O (PAGE FAULTS?) to handle as the process runs, ...

As the people from SUN might point out, the really big wins for shared
libraries are when you have large libraries, such as the window system,
or when your I/O is painfully slow relative to the CPU speed, i.e. a
25 MHz 68020 using a disk across the ethernet.  How many jumps can a
68020 do while waiting for just 1 block to come across the ethernet?

And that is why you want shared libraries.

Rich Hammond	Bell Communications Research	hammond@bellcore.com

elg@usl.UUCP (09/29/87)

in article <28957@sun.uucp>, guy%gorodish@Sun.COM (Guy Harris) says:
>> >It's not too uncommon that a new version of a program comes along, but some
>> >people still want to use the old version, for whatever reason.  
>> You have to relink with the unshared version of the old library.
> You do in some, but not all, implementations.  An earlier article indicated
> that the Amiga implementation permits you to specify a version number when the
> library is opened (does this mean programs have to explicitly open shared
> libraries, i.e., they have to know that they're using a shared library?).

The Amiga operating system consists of a tiny multi-tasking
message-passing kernel, various system processes to handle I/O, the
mouse, etc., and a whole slew of libraries, all of which are shared,
all of which are loaded upon demand (i.e. 'opened'). So yes, programs
have to explicitly open shared libraries. However, Amiga "C" compilers
handle that for stdio and other "vanilla" stuff; it's just when you're
worrying about windowing and other down'n'dirty machine-dependent
stuff that you need to explicitly worry about opening and closing
libraries.

Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
going to run on a 512K machine with no MMU and two floppies. So they
don't have to worry about backward compatibility with Version 7 &
earlier programs written 15 years ago like Sun does :-).

--
Eric Green  elg@usl.CSNET       from BEYOND nowhere:
{ihnp4,cbosgd}!killer!elg,      P.O. Box 92191, Lafayette, LA 70509
{akgua,killer}!usl!elg        "there's someone in my head, but it's not me..."

schwartz@gondor.psu.edu (Scott E. Schwartz) (09/29/87)

Here is an anecdote that describes the ups and downs of runtime linking
of shared libraries (which is often how it is done).

In revision 20 of Primos, Prime introduced dynamically linkable
shared libraries, which they call EPFs (for Executable Program Formats).
Previously, privileged software could be shared, but not user code.
Also, you had to dedicate a chunk of address space to the library.  In
rev 20 they made the whole system more general, and allowed user code
to be dynamically linked at runtime to a single copy of the library.
The increase in functionality was very large: it became easy to do lots
of things that had been hard in rev 19, and executables became very
small.  On the other hand, OS performance declined dramatically.  A
Prime 9950 that could comfortably support 80 users at rev 19 became
sluggish with 30 users at rev 20.  Now, there was probably a fair
amount of revision in the OS, so it's hard to claim that the new
style EPFs are the cause of the performance loss, but it is 
possible. 


#disclaimer  "Just my uninformed opinion."


-- Scott Schwartz            schwartz@gondor.psu.edu

guy%gorodish@Sun.COM (Guy Harris) (09/30/87)

> > if this causes a disk-space problem, you might have to link with an
> > unshared version of the library.
> 
> How can this solve the disk space problem? You'll have *at least* one
> copy of the old version around, linked to the unconverted application,
> and possibly many copies.

No, you have only a copy of the routines from the old version that the
unconverted application *actually uses*.  If the difference between the
increase in size of the application due to having a copy of those routines
bound in, and the size of the old library, is large enough, this *will* save
disk space.
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

blarson@skat.usc.edu.UUCP (09/30/87)

In article <2971@psuvax1.psu.edu> schwartz@gondor.psu.edu (Scott E. Schwartz) writes:
[some misinformation about Primos shared libraries]

Primos has had shared libraries for > 5 years.  Prior to Primos 19.4,
they only could be installed from the console, at a fixed address
determined by the system administrator.  Primos shared libraries are
implemented via an indirect call through a pointer to the routine name
with the fault bit set (hardware support).  The pointer is replaced by
one to the actual address, so future calls to the same function just
go through an additional level of indirection.  (The pointer must be
in the data section rather than the code section for EPFs.)

At Primos 19.4, an additional mechanism was added so additional
libraries could be searched, via a per-user library search list.  The
system administrator can modify the system wide default search rules.

These new shared libraries are a special case of EPFs.  EPFs are
programs that are dynamically linked into the user's address space (at a
segment boundary) and are shared between users.  EPFs have been around
since Primos 19.0, but were not documented and fully supported till
Primos 19.4.

I seriously doubt any performance hit at Primos 20.0 can be attributed
to shared libraries, unless the system administrator made a serious
mistake in the default search rules.  It's hard to do a remote
diagnosis on limited information, but it sounds like the system has
inadequate memory and/or problems related to the new disk format.
(Possibly due to improper installation of 20.0.)

Ps:  We went to Primos 21.0 for improved performance on one of our
large systems.
--
Bob Larson		Arpa: Blarson@Ecla.Usc.Edu
Uucp: {sdcrdcf,cit-vax}!oberon!skat!blarson		blarson@skat.usc.edu
Prime mailing list (requests):	info-prime-request%fns1@ecla.usc.edu

daveb@geac.UUCP (Dave Collier-Brown) (10/04/87)

In article <2208@umn-cs.UUCP> stachour@umn-cs.UUCP writes:
| That's why Multics has had search-paths since day-1.  That's also why it
| has the phrase "-referencing_dir" in its rules, so that programs pointed
| at an old version of a dynamically-linked shared library thereby
| getting the first of a set of routines from that particular version
| of the library will get the rest of the set that match.
| 
| (Multics also has done up-referencing and down-referencing of data,
| argument-interrogation, compatible-incompatible replacements,
| system_library_obsolete, and other ways to handle multiple versions
| of a program, but that's another 20-year-old story that few read,
| because it works so well no-one needs to read or change the code.)

  I suspect, from working with a programmer (Andrew Forber) who put
up an almost-virtual-memory system in place of an overlay loader,
that a proper, elegant, and even reasonably efficient memory manager of
the Multics sort could be put into Minix without seriously impacting
either its performance or size.

  Yes, I'm suggesting that a page-and-segment manager could be smaller
than a swapper and some bits of the file system associated with it.

 --dave (ever notice that sV and 4.3 are *bigger* than Multics) c-b
-- 
 David Collier-Brown.                 {mnetor|yetti|utgpu}!geac!daveb
 Geac Computers International Inc.,   |  Computer Science loses its
 350 Steelcase Road,Markham, Ontario, |  memory (if not its mind)
 CANADA, L3R 1B3 (416) 475-0525 x3279 |  every 6 months.

guy%gorodish@Sun.COM (Guy Harris) (10/04/87)

> Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
> going to run on a 512K machine with no MMU and two floppies. So they
> don't have to worry about backward compatibility with Version 7 &
> earlier programs written 15 years ago like Sun does :-).

Well, that may have been *part* of the reason the scheme described in the Sun
paper doesn't require libraries to be explicitly opened, but it's not all of
it.  Since you tell the linker which libraries you're linking with, it can
leave information about which shared libraries are needed around for the
startup code to use, so that code can do the opening and attaching for you.
This scheme could *permit* you to open libraries at any time, or close them, if
it wanted, but it doesn't *require* you to do so, regardless of which library
you're using.

Such a scheme follows the dictum "simple things should be simple" (i.e., if you
want to call a routine, you just write a call in your code and tell the linker
which libraries to look in) and "complicated things should be possible" (while
the scheme described in that paper doesn't include a programmatic interface to
the run-time loader, it doesn't prevent such an interface from being provided,
and lists this as a future direction).

(Multics, one of the original systems that provided dynamic linking to shared
libraries, didn't require you to explicitly attach these libraries either.)
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

guy%gorodish@Sun.COM (Guy Harris) (10/05/87)

>  --dave (ever notice that sV and 4.3 are *bigger* than Multics) c-b

OK:

	1) What are the definitions of "sV", "4.3", and "Multics" here?  Do you
	   mean the kernel (in the case of the UNIX systems) and the hardcore
	   supervisor (in the case of Multics), or do you mean the entire
	   system in all cases?

	   If you mean the former, is the Multics TCP/IP code part of the
	   hardcore supervisor?  If so, are you comparing the sizes of 4.3 with
	   TCP/IP and Multics with TCP/IP, both without TCP/IP, or are you
	   comparing apples and oranges?

	2) With the aforementioned questions answered, what *are* the sizes in
	   question?
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

wesommer@athena.mit.edu (William E. Sommerfeld) (10/05/87)

In article <29944@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>	   .... is the Multics TCP/IP code part of the
>	   hardcore supervisor?  

No, it's not.  There is some device driver support (I'm not familiar
with the details) in hardcore, but the bulk of the code runs in the
user ring (or at least outside of hardcore) in a daemon process.
Because of this, it's possible to completely stop TCP, install a new
version, and restart it while the system is still running.  It's also
very unlikely that a failure in the TCP/IP code will cause the entire
system to crash.

As for sizes.. Looking in >lib>net>e on MIT-Multics, I see about 50
records (of 1024 36-bit words each) of object segments which correspond
to the BSD /sys/netinet kernel code.  That's about 230K 8-bit bytes..
about 1/2 of a 4.3BSD+NFS VAX kernel (including the symbol table).

					- Bill

aegl@root.co.uk (Tony Luck) (10/05/87)

In article <2114@sfsup.UUCP> shap@sfsup.UUCP (J.S.Shapiro) writes:
>There is a technique which can be used to avoid all this which is to have
>an indirection table and a directory in each library, or a
>well-known-globals list, as someone suggested, but this implies a
>remarkable performance hit.

On many architectures I wouldn't call this a "remarkable" performance hit.
Consider the humble 68000, and a scheme that involves a "jsr" to a known
location in a jump table within the shared library, which then does a "jmp"
to the real function.  Thus a typical function call will do at least

	jsr <entry point in jump table>			20 cycles
	jmp <real address of function>			12 cycles
	rts <back to user>				16 cycles

so a null function will waste 12 out of 48 cycles i.e. 25% - this *is*
a "remarkable" performance hit ... but consider a real function with some
local stack frame to set up and 3 registers to save/restore:

	jsr <entry point in jump table>			20 cycles
	jmp <real address of function>			12 cycles
	link <set up local stack frame>			18 cycles
	movm <save three registers>			38 cycles
	 :
	movm <restore three registers>			38 cycles
	unlk <restore previous frame>			12 cycles
	rts <back to user>				16 cycles

Here the overhead is 12 cycles out of 154, i.e. just under 8% - still bad,
but not a complete disaster - and we still haven't counted any cycles for
the function doing anything at all useful, just a fairly typical function
call overhead.

I guess that this general overhead will not be hugely different on other
architectures.

-Tony Luck (Porting manager - Root Computers Ltd., London)

guy%gorodish@Sun.COM (Guy Harris) (10/05/87)

> As for sizes.. Looking in >lib>net>e on MIT-Multics, I see about 50
> records (of 1024 36-bit words each) of object segments which correspond
> to the BSD /sys/netinet kernel code.  That's about 230K 8-bit bytes..
> about 1/2 of a 4.3BSD+NFS VAX kernel (including the symbol table).

Good, somebody has access to real and current data....

	1) How much of the 50 pages are code+data as opposed to symbol table?
	   (I haven't used a Multics system in ages, but I presume it has
	   some way of finding out how much of the segment is code and data,
	   right?)

	2) How does the size of the hardcore supervisor plus the TCP/IP code
	   compare to the size of a 4.3BSD kernel (with NFS if the Multics in
	   question includes a similar sort of distributed file system,
	   without NFS if it doesn't)?

I presume the VAX figure is for the entire kernel, not just the "/sys/netinet"
code.
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

henry@utzoo.UUCP (Henry Spencer) (10/06/87)

> Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
> going to run on a 512K machine with no MMU and two floppies...

You might be interested to know that early versions of Unix ran on machines
on which 512KB would have been considered an impossible dream, with no MMU,
and with hard disks not much bigger or faster than modern floppies.  It was
a bit primitive by modern standards, but it did work.

Minix is said to run fine on a 512K ST.
-- 
PS/2: Yesterday's hardware today.    |  Henry Spencer @ U of Toronto Zoology
OS/2: Yesterday's software tomorrow. | {allegra,ihnp4,decvax,utai}!utzoo!henry

wesommer@athena.mit.edu (William Sommerfeld) (10/08/87)

In article <30034@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>Good, somebody has access to real and current data....
[regarding the size of Multics]

The Multics hardcore supervisor (roughly equivalent to the UNIX kernel
less its network & socket code) is about 160,000 lines of PL/1 and
70,000 lines of "non-PL/1" (assembler, mostly).

By comparison, the equivalent part of the VAX BSD+NFS kernel (/sys/h,
/sys/ufs, /sys/vax, /sys/vaxuba, /sys/vaxmba, and /sys/sys less
/sys/sys/uipc_*) is about 110,000 lines, of which about half are
device drivers; of the device drivers, maybe 1/4 are in use on an
average system.  So, it's about 230,000 lines for Multics vs. about
65,000 for an equivalent section of BSD unix.

For the *total* Multics system as of 1985, someone counted about 2
million lines of PL/1 source, 224 thousand lines of assembler source,
and 165 thousand lines of "other source", compiled into 10 million 36
bit words of object code (that's 45 MB).

					- Bill

elg@usl (Eric Lee Green) (10/11/87)

in article <8714@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) says:
>> Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
>> going to run on a 512K machine with no MMU and two floppies...
> You might be interested to know that early versions of Unix ran on machines
> on which 512KB would have been considered an impossible dream, with no MMU,
> and with hard disks not much bigger or faster than modern floppies.  It was
> a bit primitive by modern standards, but it did work.

I assume you are talking about the PDP-11. The PDP-11 does have an MMU,
which does address relocation (i.e. you have a virtual address space
of 64K, located in a larger physical address space of 256K to
4 megabytes depending on the 18-, 20-, or 22-bit models). I've never used a
PDP-11 myself, but I have the architecture manual for the critter
sitting somewhere in my closet... what the world needs is a 32 bit
PDP-11 :-).

I am not certain as to how Minix is implemented on a 68000-based
machine. Probably they use the old pc-relative-relocatable-code trick
of OS9-68K, or use an intelligent relocating loader like the AmigaDOS
loader. Although how you would handle forks is beyond me (all your
pointers in stuff on the heap would be pointing back to the old copy
of the heap, not to the current one that you just generated!).
However, suffice it to say that Minix is not "full" Unix, not even v7
(no swapping, for one thing) and that 68000 machine code is nowhere
near as terse and compact as PDP-11 or even eighty-eighty-sux (8086).

--
Eric Green  elg@usl.CSNET       from BEYOND nowhere:
{ihnp4,cbosgd}!killer!elg,      P.O. Box 92191, Lafayette, LA 70509
{ut-sally,killer}!usl!elg     "there's someone in my head, but it's not me..."

mahler@usl-pc.UUCP (Stephen J. Mahler) (10/11/87)

In article <286@usl> elg@usl (Eric Lee Green) writes:
>in article <8714@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) says:
>>> Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
>>> going to run on a 512K machine with no MMU and two floppies...
>> You might be interested to know that early versions of Unix ran on machines
>> on which 512KB would have been considered an impossible dream, with no MMU,
>> and with hard disks not much bigger or faster than modern floppies.  It was
>> a bit primitive by modern standards, but it did work.
>
>I assume you are talking about the PDP-11. The PDP-11 does have a MMU,
>which does address relocation (i.e. you have a virtual address space
>of 64K, located in a larger physical address space of 256K to
>4 megabytes depend on 18, 20, or 22 bit models). I've never used a
>PDP-11 :-).
>

DEC machines of PDP design included versions that did not include
MMUs.  The two that jump to mind are the pdp-11/20-15-10 and the
LSI-11/03 series.  UNIX ran on these machines.  The straightforward
approach was called MINI-UNIX.  It had features between V6 and V7
and the only real stripping of the OS was the removal of account "groups"
(/etc/group, newgrp, chgrp, etc.).  It did run on systems that had only
two floppies (like the AED 8 inch floppy system with about 1 meg per
drive).  It ran well, even from the portable generator on the
corn combine !!   .........  Steve

hunt@spar.SPAR.SLB.COM (Neil Hunt) (10/12/87)

In article <439@root44.co.uk> aegl@root44.UUCP (Tony Luck) writes:
>>There is a technique [...] but this implies a
>>remarkable performance hit.
>
>On many architectures I wouldn't call this a "remarkable" performance hit
>[...] Thus a typical function call will do at least
>
>	jsr <entry point in jump table>			20 cycles
>	jmp <real address of function>			12 cycles
>	rts <back to user>				16 cycles
>
>[...] consider a real function with some
>local stack frame to set up and 3 registers to save/restore:
>
>	jsr <entry point in jump table>			20 cycles
>	jmp <real address of function>			12 cycles
>	link <set up local stack frame>			18 cycles
>	movm <save three registers>			38 cycles
>	 :
>	movm <restore three registers>			38 cycles
>	unlk <restore previous frame>			12 cycles
>	rts <back to user>				16 cycles
>
>Here the overhead is 12 cycles out of 154 i.e. just under 8% - still bad,
>bu not a complete disaster - and we still haven't counted any cycles for
>the function doing anything at all useful just a fairly typical function
>call overhead.
>
>I guess that this general overhead will not be hugely different on other
>architectures.
>
>-Tony Luck (Porting manager - Root Computers Ltd., London)

We are considering 12 cycles on a (say) 25MHz cpu, or 0.5 uS.

Now let's consider the performance when we don't have shared libraries,
and a function call hits a page fault:

	jsr <function-on-another-page>			20 cycles

	page fault interrupt				200 cycles
	disc latency delay				300000 cycles
	50000 instructions to bring in the page		750000 cycles
	1000 instructions in OS restoring state		15000 cycles

	movm <save registers>
	...


I think I will opt for the shared libraries, as soon as
I can get my hands on SunOS release 4.0!

Neil/.

jpdres10@usl-pc.UUCP (Green Eric Lee) (10/13/87)

In message <73@usl-pc.UUCP>, mahler@usl-pc.UUCP (Stephen J. Mahler) says:
>> Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
>> going to run on a 512K machine with no MMU and two floppies...
>DEC machines of PDP design included versions that did not include
>MMUs.  The two that jump to mind are the pdp-11/20-15-10 and the
>LSI-11/03 series.  UNIX ran on these machines.  The straight forward
>approach was called MINI-UNIX.  It had features between V6 and V7
>and the only real stripping of the OS was the removal of account "groups"
>(/etc/group, newgrp, chgrp, etc.).  It did run on systems that had only
>two floppies (like the AED 8 inch floppy system with about 1 meg per
>drive).  It ran well, even from the portable generator on the
>corn combine !!   .........  Steve

I suspect that this "MINI-UNIX" was the version that swapped the
entire used address space whenever there was a context switch. I
seriously doubt whether it was as powerful as the current crop of
68000/no MMU OS's such as OS-9 and TRIPOS (AmigaDOS), which give up
the "fork" call in favor of a "spawn" call in order to put more than
one process in a single address space (with fork, you'd have to
re-adjust every pointer on the heap when you duplicated it for the new
process, or else keep tags hanging around telling you the data type --
anathema for "C").

In any event, this whole ball of hair started with "Well, why doesn't"
<insert machine name here> "run Unix instead of some odd-ball
operating system that I've never heard of?" The answer, once again, is
because <insert machine name> was designed to cost around $1,000, and
thus left out hardware (MMU & hefty hard drive) necessary to run a
MODERN Unix (Sys V.2 or BSD -- forget all these v6's and such, nobody
would WANT to use something like that nowadays, although I'm sure it
was revolutionary enough originally).

--
Eric Green  elg@usl.CSNET       from BEYOND nowhere:
{ihnp4,cbosgd}!killer!elg,      P.O. Box 92191, Lafayette, LA 70509
{ut-sally,killer}!usl!elg     "there's someone in my head, but it's not me..."

steve@edm.UUCP (Stephen Samuel) (10/13/87)

In article <8714@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) writes:
> > Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
> > going to run on a 512K machine with no MMU and two floppies...
I have every reason to believe it COULD do so: The first Radio Shack Model
16's were 512K machines and were distributed on 512K floppies.
 
 The neat thing about it was that the first floppy was actually a XENIX system
disk. 
 With a little fooling around I was able to get a minimal system to boot on 
a single-sided (500k) floppy disk -- the big problem was that I needed to set
aside ~200k for swap space for FSCK which left little room for more than a
minimal subset of /bin.

I was, however, able to get a relatively nice working set (incl. VI and CC)
onto 2 double-sided floppies at the request of the Alberta Research Council.
 
 Other than the fact that the M16 had an MMU, this almost proves the point
by example.
-- 
-------------
 Stephen Samuel 			Disclaimer: You betcha!
  {ihnp4,ubc-vision,seismo!mnetor,vax135}!alberta!edm!steve
  BITNET: USERZXCV@UQV-MTS

jfh@killer.UUCP (The Beach Bum) (10/15/87)

In article <192@edm.UUCP>, steve@edm.UUCP (Stephen Samuel) writes:
> In article <8714@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) writes:
> > > Needless to say, the Amiga OS is not Unix, for one thing, Unix isn't
> > > going to run on a 512K machine with no MMU and two floppies...
> I have every reason to believe it COULD do so: The first Radio Shack Model
> 16's were 512K machines and were distributed on 512K floppies.
  
What about LSX (Unix for LSI-11's)?  I seem to recall that there was a
thing that ran unprotected on 11/03's in 56K on RX01's.

>  The neat thing about it was that the first floppy was actually a XENIX system
> disk. 
>  With a little fooling around I was able to get a minimal system to boot on 
> a singld-sided (500k) floppy disk -- The big problem was that I needed to set
> aside ~200k for swap space for FSCK which left little room for more than a
> minimal subset of /bin.

I too did that.  The hard part was building the first double-sided root
disk.  I also had the FSCK crash bug.  Seems you couldn't run FSCK standalone
off of the floppy disk, so if you had real bad corruption, you might have
to scrag the disk and start from backups.

> I was, however able to get a relatively nice working set (incl VI and CC) 
> onto 2 double-sided floppies at the request of the alberta research council.

I have gone one step weirder.  I have a number of dual-partition floppies
(details on request) that are partitioned into binaries and sources.  Each
partition is 608K, which is enough for gov'ment work.  One set is a liberal
collection of kernel sources I have come up with (mostly weird device
drivers and a re-write of mdep/malloc.c with best-fit allocation).  That's
the environment I use for my stranger kernel exploits.

>  Other than the fact that the M16 had an MMU, this almost proves the point
> by example.
> -- 
>  Stephen Samuel 			Disclaimer: You betcha!

You call what the 16 had an MMU?  It has a pair of base and length registers.
The VA of one set is 0, the VA for the other is 800000.  Separate I&D loads
the text at 800000, which is kind of strange.  The MMU limits the addressable
memory to 1MB as the length is 8 bits with a 4K click.  Not much of an MMU.

- John.
-- 
John F. Haugh II		HECI Exploration Co. Inc.
UUCP:	...!ihnp4!killer!jfh	11910 Greenville Ave, Suite 600
"Don't Have an Oil Well?"	Dallas, TX. 75243
" ... Then Buy One!"		(214) 231-0993

lindsay@cheviot.newcastle.ac.uk (Lindsay F. Marshall) (10/15/87)

In article <286@usl> elg@usl (Eric Lee Green) writes:
>... what the world needs is a 32 bit PDP-11 :-).

It's called a VAX..........

Why does everyone talk about the 11 as though there weren't thousands of
them running away happily all over the world? (Come to that, what about
all the 8's there must be!)

Lindsay

-- 
Lindsay F. Marshall
JANET: lindsay@uk.ac.newcastle.cheviot  ARPA: lindsay%cheviot.newcastle@ucl-cs
PHONE: +44-91-2329233                   UUCP: <UK>!ukc!cheviot!lindsay
"How can you be in two places at once when you're not anywhere at all?"

peter@sugar.UUCP (Peter da Silva) (10/16/87)

> DEC machines of PDP design included versions that did not include
> MMUs. ...  UNIX ran on these machines. ...  It had features between V6
> and V7 and the only real stripping of the OS was the removal of account
> "groups" (/etc/group, newgrp, chgrp, etc.).

So tell me... how did they implement fork() without an MMU? Did they do
address fixups on running code, or did they require all code be relocatable
(not an unreasonable request on the PDP-11, by the way, and one place where
the 11 has the 68000 beat hollow)?
-- 
-- Peter da Silva  `-_-'  ...!hoptoad!academ!uhnix1!sugar!peter
-- Disclaimer: These U aren't mere opinions... these are *values*.

steve@edm.UUCP (Stephen Samuel) (10/20/87)

In article <1420@spar.SPAR.SLB.COM>, hunt@spar.SPAR.SLB.COM (Neil Hunt) writes:
> In article <439@root44.co.uk> aegl@root44.UUCP (Tony Luck) writes:
> >>There is a technique [...] but this implies a
> >>remarkable performance hit.
> Now lets consider the performance when we don't have shared libraries,
> and a function call hits a page fault:
> 
> 	jsr <function-on-another-page>			20 cycles
> 	page fault interrupt				200 cycles
> 	disc latency delay				300000 cycles
> 	50000 instructions to bring in the page		750000 cycles
> 	1000 instructions in OS restoring state		15000 cycles
> 	movm <save registers>
> 	...
Under the Michigan Terminal System (MTS), about 6 meg of the user
address space is dedicated to shared memory (incl. unreadable
system tables and at LEAST 1-2 meg of shared routines). In some
cases, some of the more popular programs have their shared text
(almost) entirely in shared space.

 On a good day, the system can support 500-700 people before it
gets into trouble (it runs on an Amdahl 5880 [I think]). Another
nice feature is that program files can be real small. A nice
'pr'-like program is well under 4K long. The APL Interpreter is a
whole 8 bytes long (a pointer to the low core tables :-).

With a nice, efficient loader, you can limit the cost of the linking to 
the actual loading.



-- 
-------------
 Stephen Samuel 			Disclaimer: You betcha!
  {ihnp4,ubc-vision,seismo!mnetor,vax135}!alberta!edm!steve
  BITNET: USERZXCV@UQV-MTS

mahler@usl-pc.UUCP (Stephen J. Mahler) (10/21/87)

In article <881@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>> DEC machines of PDP design included versions that did not include
>> MMUs. ...  UNIX ran on these machines. ...  It had features between V6
>> and V7 and the only real stripping of the OS was the removal of account
>> "groups" (/etc/group, newgrp, chgrp, etc.).
>
>So tell me... how did they implement fork() without an MMU? Did they do
>addres fixups on running code, or did they require all code be relocatable
>(not an unreasonable request on the PDP-11, by the way, and one place where
>the 11 has the 68000 beat hollow).
>-- 
Only one proc was in memory at a time.  When swtch() was called the
entire proc moved to swap.  Considering the ~ 20K (word) proc space
(small by today's standards ... but reasonable back then) and the
speed of the CPU, it made a good match with floppies that were DMA.
On the other hand DEC's RX01 floppies were PIO; now that was slow
(but it too was working at some locations)!
    .... Steve   KF5VH

tainter@ihlpg.ATT.COM (Tainter) (10/26/87)

In article <2455@cheviot.newcastle.ac.uk>, lindsay@cheviot.newcastle.ac.uk (Lindsay F. Marshall) writes:
> In article <286@usl> elg@usl (Eric Lee Green) writes:
> >... what the world needs is a 32 bit PDP-11 :-).
> It's called a VAX..........
Despite the fact that it has a PDP-11 submode, the VAX is very definitely
not a 32-bit PDP-11.  The beauty of the PDP-11 is the low percentage of
special-case nonsense.  The VAX is almost all special-case goop.
(Admittedly, it isn't anywhere near as bad as the intel 80* family of
satanic torture equipment.)
> Lindsay F. Marshall