[net.unix-wizards] Any details on "the Newcastle Connection"?

jmc@root44.UUCP (05/28/83)

We felt that the idea of the "Newcastle Connection" suffered from the
following problems.

1.	'Chroot' not blocking 'cd /..' was a v7 bug fixed in System III
	and, I believe, in BSD.  It is therefore arguably a retrograde
	step to reintroduce behaviour that people have deemed a bug and
	have already dealt with.

2.	You have to recompile every program you ever heard of that MIGHT
	POSSIBLY EVER want to talk to a remote machine (for every new
	release of the package, or whenever you want to change some
	network protocol), so that it links in the appropriate version
	of /lib/libc.a (see the sketch after this list).
	Obviously this must include your shell (on every machine), plus all
	the utilities.  Personally, I prefer to hack the kernel.

3.	I would require convincing that the right things get executed on the
	right machines with the right links when you do things like

		cd /../m1/dir1
		../../m2/bin/xyz | ../../m3/bin/lpr ../../m4/dir2/file

	I am sure that you need some kind of syntax to do routing.

4.	Having everything in user space worries me - can you really recover
	happily from every interrupt/quit/kill -9?

5.	Every path name and file descriptor has to be scrutinised -
	that sounds rather expensive, though statistics are required.
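
To make point 2 concrete: as I understand it, the package works by
substituting its own system-call stubs in the C library, roughly along
the lines sketched below.  The helper names tnc_remote_open() and
_sys_open() are my own invention, not Newcastle's.

	/*
	 * Sketch only: a replacement open() in /lib/libc.a that
	 * diverts "/../machine/..." names over the network.
	 */
	#include <string.h>

	int tnc_remote_open();	/* network side, not shown */
	int _sys_open();	/* the genuine open(2) trap */

	int
	open(path, mode)
	char *path;
	int mode;
	{
		/* by convention a remote name starts with "/../" */
		if (strncmp(path, "/../", 4) == 0)
			return tnc_remote_open(path, mode);
		return _sys_open(path, mode);
	}

Every binary on every machine has to be relinked against such a library
before it can so much as open a remote file.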

All in all, a nice idea, but I think it's the wrong way to implement it.
Any comments on my views?

John Collins,
Root Computers Ltd,
	....!vax135!ukc!root44!jmc

martin@dciem.UUCP (Martin Tuori) (06/01/83)

I'd like to respond to some of the points raised in John Collins's
review of the networking software called 'The Newcastle Connection',
aka 'Unix United', from the U. of Newcastle.  I visited Newcastle two
months ago and had a good look inside their system.

His items 1, 2, and 5 question some of the implementation decisions,
and the resulting performance of the software. The current version
of TNC was deliberately implemented in user code, to maximize its
portability. It can be moved to a new UNIX on a new machine much
more quickly than if it had been placed in the kernel. The folks at
Newcastle expect that the big vendors will choose to re-implement it
more efficiently to suit their individual systems.

In item 2, John raises the question of cost of recompilation when
a new version comes along. I don't see that this is any different
from other network software architectures; if 3COM sends you a new
distribution of UNET, you will have to recompile every program that uses the
network -- same result.

I can assure you that TNC works, and that file creation, I/O, process
activation, and signals are properly handled. What is surprising
about the scheme is how easily everything follows from what is a very simple
idea -- extending the filesystem namespace to include other systems'
filesystems. The recent article in Software -- Practice and Experience
did not prepare me for how easy TNC is to use.
Herein lies its strength: a user wishing to use the network
need learn only one main rule -- the extended name space.  Thereafter
the commands, their arguments, and their combination by pipes,
redirection, etc. follow normal UNIX conventions.  In other network
schemes I have used, the user is forced to learn a new syntax for
file transfer, another for remote execution, and so on.  This saving
in learning time is, to me,
THE KEY ISSUE; implementation details are secondary.
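
To show what that transparency means in practice, consider the toy
program below.  It is my own illustration, not code from the TNC
distribution, and the machine name 'b' is made up; the point is that
the program contains no networking code at all.

	#include <stdio.h>

	main()
	{
		FILE *fp;
		int c;

		/* under TNC this reaches /etc/motd on machine b,
		   because the name crosses "/.."; on a plain UNIX
		   it simply fails */
		fp = fopen("/../b/etc/motd", "r");
		if (fp == NULL) {
			perror("/../b/etc/motd");
			exit(1);
		}
		while ((c = getc(fp)) != EOF)
			putchar(c);
		fclose(fp);
		exit(0);
	}

The same program, unchanged, works on local files -- which is exactly
the saving in learning time I mean.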

One last comment that came up in talking with people at Newcastle --
the hardest and most confusing parts of TNC arise from those parts of
the user-to-UNIX dialogue that are not part of the filesystem namespace.
Files, directories and devices are straightforward; but if you want
to mail to john on machine b, mail /../b/john isn't right.  Mail uses
a namespace separate from the filesystem, i.e. the password file.
Similarly, ps and kill operate on yet another namespace, numerical
process ids; so you cannot say 'kill /../b/4073'.  Both of these
namespaces could be incorporated into the normal namespace, by provision
(as for devices) of stubs -- say /procs/4073, and /users/john. The
unifying concept of the UNIX filesystem could be carried a step further.
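
To make the stub idea concrete, here is a purely speculative fragment:
neither /procs nor this command exists in TNC, and the path layout is
only the suggestion above made literal.  It shows how a kill that
accepted file-system names for processes might take such a name apart.

	#include <stdio.h>

	main(argc, argv)
	int argc;
	char **argv;
	{
		char machine[32];
		int pid;

		if (argc != 2 ||
		    sscanf(argv[1], "/../%31[^/]/procs/%d",
			machine, &pid) != 2) {
			fprintf(stderr,
			    "usage: kill /../machine/procs/pid\n");
			exit(1);
		}
		/* a real version would forward the signal across
		   the network; this one only reports the parse */
		printf("would signal process %d on machine %s\n",
		    pid, machine);
		exit(0);
	}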

I suppose it's obvious that I think very highly of their work.
In fairness, it's not a new idea; I can remember similar suggestions
arising several years ago. But I hope it's an idea that will bear
fruit this time. I look forward to more discussion.
                            Martin Tuori
                            ...decvax!utzoo!dciem!martin

trt@rti.UUCP (06/10/83)

There have been a few articles on the Newcastle approach
to providing a distributed UNIX system.
What about other people's?
Like something done at Purdue?
And a couple of nets at Berkeley, as I recall.

Is the Newcastle Connection the best such system?
	Tom Truscott