[alt.security] Hard links to directories: why not?

wiml@milton.u.washington.edu (William Lewis) (07/18/90)

   In the man entry for ln(1) (and for link(2)),  it says that
hard links may not be made to directories, unless the linker is
the super-user (in order to make '.' and '..', I suppose). My 
question is: why not? (and is there any reason that I, if I'm
root, shouldn't do this?) It seems perfectly harmless to me; although 
it would allow the user to make a pretty convoluted directory structure,
that's the user's privilege. So I suppose it's probably a security
issue somehow (restrictions of this sort seem to be). Hence the
crosspost to alt.security. 
-- 
wiml@blake.acs.washington.edu       Seattle, Washington  | No sig under
(William Lewis)  |  47 41' 15" N   122 42' 58" W  |||||||| construction

schuman@sgi.com (Aaron Schuman) (07/19/90)

William Lewis>	In the man entry for ln(1) (and for link(2)),
William Lewis>	it says that hard links may not be made to directories,
William Lewis>	unless the linker is the super-user ...
William Lewis>	My question is: why not?
William Lewis>	It seems perfectly harmless to me, although 
William Lewis>	it would allow the user to make a pretty convoluted
William Lewis>	directory structure, that's the user's privilege.

I don't know of any way that an ordinary user could parlay the ability
to make hard links to a directory into obtaining superuser status.

But that is not the only reason why some system calls are restricted.
A foolish user could create loops in the directory structure.
Lots of file system functions depend on the absence of loops in
order to guarantee completion.  Some system calls would never return.
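
A minimal sketch of why (hypothetical C, not from any particular utility):
the naive recursive walk below assumes the directory graph is a tree, and
once a directory is hard-linked back under one of its own ancestors it
simply never terminates.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Naive tree walk: correct only while the directory graph is a tree.
 * If some subdirectory is a hard link back to an ancestor, the
 * recursion revisits it forever (until the path or the stack blows up). */
void walk(const char *path)
{
    DIR *d = opendir(path);
    struct dirent *e;
    char sub[1024];

    if (d == NULL)
        return;
    while ((e = readdir(d)) != NULL) {
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;
        snprintf(sub, sizeof sub, "%s/%s", path, e->d_name);
        printf("%s\n", sub);
        walk(sub);      /* plain files are harmless: opendir() just fails */
    }
    closedir(d);
}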


William Lewis>	So I suppose it's probably a security issue somehow

Denial of service is sometimes considered a security issue,
and sometimes considered just a matter of proper administration.
Choose your own taxonomy of admin nightmares.

					Aaron

smb@ulysses.att.com (Steven Bellovin) (07/19/90)

In article <5222@milton.u.washington.edu>, wiml@milton.u.washington.edu (William Lewis) writes:
> 
>    In the man entry for ln(1) (and for link(2)),  it says that
> hard links may not be made to directories, unless the linker is
> the super-user (in order to make '.' and '..', I suppose). My 
> question is: why not? (and is there any reason that I, if I'm
> root, shouldn't do this?) It seems perfectly harmless to me, although 
> it would allow the user to make a pretty convoluted directory structure,
> that's the user's privilege. So I suppose it's probably a security
> issue somehow (restrictions of this sort seem to be). Hence the
> crosspost to alt.security. 

I quote from the original Ritchie and Thompson paper:

	The directory structure is constrained to have the form of a
	rooted tree.  Except for the special entries ``.'' and ``..'',
	each directory must appear as an entry in exactly one other
	directory, which is its parent.  The reason for this is to
	simplify the writing of programs that visit subtrees of the
	directory structure, and more important, to avoid the
	separation of portions of the hierarchy.  If arbitrary links to
	directories were permitted, it would be quite difficult to
	detect when the last connection from the root to a directory
	was severed.

No need for excess paranoia...

jones@pyrite.cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) (07/19/90)

William Lewis asked why UNIX doesn't allow hard links to directories.

I don't think the reason has anything to do with security.  My understanding
of the problem is that it would allow the creation of circularly linked
structures that the crude reference count scheme of UNIX could not deal
with.  UNIX assumes that the only circular linkages in the directory tree
are . and .., and the special services "mkdir" and "rmdir" take care of
these special cases.

The Cambridge CAP file system demonstrated quite effectively that it is
perfectly possible to allow users to create arbitrary linkages between
directories and files, but this worked only because they invented a nice
combination of garbage collection and reference counts to handle the
problem of reclaiming circularly linked garbage.

The CAP file system was capability based, but the decision to allow
directories to be circularly linked is independent of the access control
mechanisms they used on their capabilities.  In UNIX terms, the CAP
file system can be thought of as having an access rights field like that
in each UNIX I-node, but this was stored in the link to the file, so each
link could confer different access rights to the file.

					Doug Jones
					jones@herky.cs.uiowa.edu

cpcahil@virtech.uucp (Conor P. Cahill) (07/19/90)

In article <5222@milton.u.washington.edu> wiml@milton.u.washington.edu (William Lewis) writes:
>
>   In the man entry for ln(1) (and for link(2)),  it says that
>hard links may not be made to directories, unless the linker is
>the super-user (in order to make '.' and '..', I suppose). My 
>question is: why not? (and is there any reason that I, if I'm
>root, shouldn't do this?) It seems perfectly harmless to me, although 
>it would allow the user to make a pretty convoluted directory structure,

The big (and I mean REAL BBBBIIIIGGGG) problem with hard-linking directories
is that find does not know how to recognize and handle them.  When find
processes a file system it actually cd's into each directory and then cd's to ..
to go back.  When you have two directories linked together, a cd to .. in
either directory will always go to the same parent directory.  If both
links are at the exact same place in the file system you would be OK, but if
they are at different levels (different paths, apart from the basename), find
will end up skipping some of your file system.
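
A minimal sketch of a guard against that, assuming a directory can be
identified by its (st_dev, st_ino) pair (hypothetical code, not what find
actually does): remember where you were before descending, and after the
cd back to ".." verify you really got there.

#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: descend into `name` from the current directory and come back
 * via "..", verifying that ".." led to the directory we started from.
 * A hard-linked directory whose ".." points somewhere else trips the
 * check instead of silently derailing the rest of the traversal. */
int descend_and_return(const char *name)
{
    struct stat before, after;

    if (stat(".", &before) < 0 || chdir(name) < 0)
        return -1;
    /* ... process the subdirectory here ... */
    if (chdir("..") < 0 || stat(".", &after) < 0)
        return -1;
    if (before.st_dev != after.st_dev || before.st_ino != after.st_ino) {
        fprintf(stderr, "%s/.. does not lead back where we came from\n", name);
        return -1;
    }
    return 0;
}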

Now you might say that you don't care that much about find.  That is, you 
might say this until you realize that find is used as a main portion of the
backup scheme on many systems, so your backups will get screwed up.

Anyway, that is one problem.  There probably are others with equally 
disastrous results.

>that's the user's privilege. So I suppose it's probably a security
>issue somehow (restrictions of this sort seem to be). Hence the
>crosspost to alt.security. 


-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

rbn@umd5.umd.edu (Ron Natalie) (07/19/90)

And where would you have .. point if you had multiple links to
a directory?  Things would get pretty weird.

-Ron

shields@yunexus.YorkU.CA (Paul Shields) (07/19/90)

wiml@milton.u.washington.edu (William Lewis) writes:
>   In the man entry for ln(1) (and for link(2)),  it says that
>hard links may not be made to directories, unless the linker is
>the super-user (in order to make '.' and '..', I suppose). My 
>question is: why not? (and is there any reason that I, if I'm
>root, shouldn't do this?)

Imagine the fun a user could have with the following:

% ln . foo
% ln .. bar

It would annoy a lot of the utilities you might like to run, like
du, ls -R, etc.

>It seems perfectly harmless to me, although 
>it would allow the user to make a pretty convoluted directory structure,
>that's the user's privilege. So I suppose it's probably a security
>issue somehow (restrictions of this sort seem to be). Hence the
>crosspost to alt.security. 

Well, perhaps the following could be hazardous:

# rm -r bar

Just a thought.

-- 
Paul Shields             shields@nccn.yorku.ca

P.S: on VAX/VMS 3.7 the above (with a different command set of course)
is possible. I don't know about old versions of UNIX.

mvadh@cbnews.att.com (andrew.d.hay) (07/19/90)

In article <6914@umd5.umd.edu> rbn@umd5.umd.edu (Ron Natalie) writes:
"And where whould you have .. point if you had multiple links to
"a directory?  Things would get pretty wierd.

to the first entry in the list of hard links, of course.
the tricky part would be remembering to update this when you unlink
the first entry...

-- 
Andrew Hay		+------------------------------------------------------+
Ragged Individualist	|	 But I thought we were *ALL* iconoclasts!      |
AT&T-BL Ward Hill MA	|		I was just trying to fit in!!!	       |
a.d.hay@att.com		+------------------------------------------------------+

mvadh@cbnews.att.com (andrew.d.hay) (07/19/90)

In article <10527@odin.corp.sgi.com> schuman@sgi.com (Aaron Schuman) writes:
[]
"A foolish user could create loops in the directory structure.

it would be easy to have ln disallow this:
1)	resolve argv[1] and argv[2] to absolute paths
2)	determine which path is shorter
3)	strncmp() both paths for the shorter length
4)	if you have a match, you're trying to create a loop

this would also let you safely mv directories...
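
A literal sketch of that check in C, assuming realpath(3) and dirname(3)
are available; as pointed out further downthread, the prefix test catches
the direct case but not every possible loop.

#include <libgen.h>
#include <limits.h>
#include <stdlib.h>
#include <string.h>

/* True if `pre` names `path` itself or one of its ancestors. */
static int is_prefix_dir(const char *pre, const char *path)
{
    size_t n = strlen(pre);
    return strncmp(pre, path, n) == 0 && (path[n] == '\0' || path[n] == '/');
}

/* Sketch of the proposed pre-flight check for "ln olddir newname":
 * resolve both to absolute paths and refuse if either lies beneath the
 * other.  Returns 1 for "looks like a loop", 0 for OK, -1 on error. */
int looks_like_loop(const char *olddir, const char *newname)
{
    char abs_old[PATH_MAX], abs_new[PATH_MAX], tmp[PATH_MAX];

    /* the new name does not exist yet, so resolve the directory it will live in */
    strncpy(tmp, newname, sizeof tmp - 1);
    tmp[sizeof tmp - 1] = '\0';
    if (realpath(olddir, abs_old) == NULL ||
        realpath(dirname(tmp), abs_new) == NULL)
        return -1;
    return is_prefix_dir(abs_old, abs_new) || is_prefix_dir(abs_new, abs_old);
}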

-- 
Andrew Hay		+------------------------------------------------------+
Ragged Individualist	|	 But I thought we were *ALL* iconoclasts!      |
AT&T-BL Ward Hill MA	|		I was just trying to fit in!!!	       |
a.d.hay@att.com		+------------------------------------------------------+

boissier@irisa.fr (franck boissiere) (07/19/90)

From article <6914@umd5.umd.edu>, by rbn@umd5.umd.edu (Ron Natalie):
> 
> 
> And where whould you have .. point if you had multiple links to
> a directory?  Things would get pretty wierd.
> 
> -Ron

This problem does not seem so crucial for symbolic links.  This leads to
the question: what makes the problem crucial in one case and not in the
other?  Another question: when is it more appropriate to use one or the
other?
--
Franck BOISSIERE                        boissier@irisa.irisa.fr
Prototyping Lab Manager                 boissier@ccettix.UUCP
C.C.E.T.T.   B.P. 59                    boissier%irisa.irisa.fr@uunet.uu.net
35512 CESSON SEVIGNE CEDEX  FRANCE    

martin@mwtech.UUCP (Martin Weitzel) (07/19/90)

In article <13432@ulysses.att.com> smb@ulysses.att.com (Steven Bellovin) writes:
[question about security issues when hard-linking to directories]
>In article <5222@milton.u.washington.edu>, wiml@milton.u.washington.edu (William Lewis) writes:
[quote from a Ritchie and Thompson paper]

>	[...]  The reason for this is to
>	simplify the writing of programs that visit subtrees of the
>	directory structure, and more important, to avoid the
>	separation of portions of the hierarchy. [...]

Exactly!

For those of us who have no symlinks, hard-linking to a directory
(if possible) can sometimes avoid a lot of headaches.  If you don't
create circular links, the only "surprise" may be that ".." is sometimes
not what you normally expect.  Minor problems may occur with find and
some backup programs, which may duplicate parts of the disk on the backup
media and hence use more space than expected or available.

BTW: A few years back I wrote a "directory compression program" which
just linked the files around to get rid of empty slots in directories
that had once been large but later shrank in size.  (This *can* be
done with standard commands, but not in the most efficient way.)
If the directory to be compressed contained sub-directories,
things became a bit complicated, because the ".." entry of the
sub-directories had to be re-linked ... all in all it's a nice
exercise for students who really want to understand how the directory
hierarchy is implemented under UNIX :-)
-- 
Martin Weitzel, email: martin@mwtech.UUCP, voice: 49-(0)6151-6 56 83

greywolf@unisoft.UUCP (The Grey Wolf) (07/20/90)

In article <1990Jul18.235607.19403@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
>In article <5222@milton.u.washington.edu> wiml@milton.u.washington.edu (William Lewis) writes:
>>
>>   In the man entry for ln(1) (and for link(2)),  it says that
>>hard links may not be made to directories, unless the linker is
>>the super-user (in order to make '.' and '..', I suppose). My 
>>question is: why not? (and is there any reason that I, if I'm
>>root, shouldn't do this?) It seems perfectly harmless to me, although 
>>it would allow the user to make a pretty convoluted directory structure,
>
[ some stuff about find deleted... ]
>
>Now you might say that you don't care that much about find.  That is, you 
>might say this until you realize that find is used as a main portion of the
>backup scheme on many systems, so your backups will get screwed up.

Outside of the fact that "dump" *should* be the way to go to back up
systems (religious issue -- flames to Email!), the problem with hard-
linking directories is indeed a security issue in at least one respect.
Consider the user who knows that he is chroot(2)ed somewhere.  If he could,
via another account, make a hard link to somewhere upward of his chroot(2)ed
point (assuming that his new root is not the root of a separate file system),
then he could access things he wasn't meant to.

Another problem is noted in the rename(2) man page:

"CAVEAT
    The system can deadlock if a loop in the file system graph
    is present.  This loop takes the form of an entry in direc-
    tory "a", say "a/foo", being a hard link to directory "b",
    and an entry in directory "b", say "b/bar", being a hard
    link to directory "a".  When such a loop exists and two
    separate processes attempt to perform "rename a/foo b/bar"
    and "rename b/bar a/foo", respectively, the system may dead-
    lock attempting to lock both directories for modification.
    Hard links to directories should be replaced by symbolic
    links by the system administrator."

>>that's the user's privilege. So I suppose it's probably a security
>>issue somehow (restrictions of this sort seem to be). Hence the
>>crosspost to alt.security. 

Well, it IS the user's privilege to make up a convoluted directory struc-
ture in his own namespace, but using symbolic links.  They're much easier
to resolve, since you don't have to do an ncheck to find out
which directories have such-and-such an inode.

Now, WHY a user would need to make a namespace convoluted escapes me, but
the world is full of oddities, now, ain't it?



-- 
-- once bitten, twice shy, thrice stupid --
MORALITY IS THE BIGGEST DETRIMENT TO OPEN COMMUNICATION.
/earth: minimum percentage of free space changes from 10% to 0%
should optimize for space^H^H^H^H^Hintelligence with minfree < 10%

flaps@dgp.toronto.edu (Alan J Rosenthal) (07/20/90)

boissier@irisa.fr (franck boissiere) writes:
>This problem does not seem so crucial for symbolic links.  This leads to
>the question what makes the problem crucial in one case and not in the
>other?

The fact that with symlinks, the real (hard) link is given a higher status
than the symbolic link.  So, for example, `find' can ignore symlinks and
follow hard links.  With two hard links, there's no local strategy (i.e.
a strategy whose action on small portions of the filesystem is determined
only by characteristics of that small portion) which follows only one,
except for otherwise distinguishable cases like `.' and `..'.
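
A small sketch of that asymmetry, assuming lstat(2): a traverser can
cheaply decline to follow a symbolic link, because the link itself is a
distinguishable object; there is no analogous local test that singles out
one of two hard links to the same directory.

#include <sys/types.h>
#include <sys/stat.h>

/* Sketch only: decide whether a traversal should descend into `name`.
 * lstat() reports the symlink itself, so symlinks can be skipped;
 * two hard links to the same directory both look like "the" directory,
 * so neither can be skipped this way. */
int should_descend(const char *name)
{
    struct stat sb;

    if (lstat(name, &sb) < 0)
        return 0;
    if (S_ISLNK(sb.st_mode))
        return 0;                   /* symbolic link: do not follow */
    return S_ISDIR(sb.st_mode);     /* plain directory entry: descend */
}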

>Another question is when is it more appropriate to use one or the other?

Well, you have to use symlinks for directories.  Often links are used to
rearrange filesystems, like on workstations running sunos prior to v4 where
/usr is a shared filesystem but /usr/spool is private, so /usr/spool is a
symlink to /private/usr/spool; even if this were a plain file it couldn't be a
hard link because it crosses filesystems.  So, quite frequently you don't have
a choice.

When you do have a choice, I recommend symlinks for anything under maintenance,
because it's too easy to have hard links broken by moving files around.  It's
also very convenient that tar preserves symlinks.

I tentatively think it's good that things like the link from /bin/e to
/bin/ed are hard links, because it saves one block of disk space on tens of
thousands of machines, therefore saving tens of thousands of blocks.  On the
other hand, I've heard that Dennis Ritchie recommended that hard links (other
than . and ..) be removed when symlinks were added, so that there weren't two
fundamental ways to do something, and that sounds reasonable to me.

ajr

imp@dancer.Solbourne.COM (Warner Losh) (07/21/90)

In article <12877@yunexus.YorkU.CA> shields@yunexus.YorkU.CA (Paul
Shields) writes: 
>P.S: on VAX/VMS 3.7 the above (with a different command set of course)
>is possible. I don't know about old versions of UNIX.

The commands that existed in VMS 3.7 still existed in VMS 5.3.  I
don't know if they will still let you create cycles in the directory
structure.  I do know they were used to share files in a VAXcluster
because VMS didn't have symbolic links....

This hard linking on VMS has caused lots of trouble, since it normally
isn't done.  BACKUP assumes that all files have one link (basically),
so restoring a backup of a disk that had odd directory entries
like this caused the files to be duplicated rather than re-linked.
This was only a problem for one type of backup (used to do
incrementals), not the full image (level 0) backups.

Don't know if they fixed it, but it sounds like a "denial of service"
security hole when the original disks are tight on space.  In
addition, the way that it was implemented caused a hole whereby
certain programs could read files that a user couldn't normally read.
Plan files read with finger spring to mind......

Warner

--
Warner Losh		imp@Solbourne.COM
Boycott Lotus.		#include <std/disclaimer>

peter@ficc.ferranti.com (Peter da Silva) (07/21/90)

In article <1990Jul20.100456.20995@jarvis.csri.toronto.edu> flaps@dgp.toronto.edu (Alan J Rosenthal) writes:
> The fact that with symlinks, the real (hard) link is given a higher status
> than the symbolic link. With two hard links, there's no local strategy which
> follows only one, except for otherwise distinguishable cases like `.' and
> `..'.

Sure, tell find not to follow a directory if the inode of "foo/.." is not
the inode of ".". (i.e., treat it as a symbolic link)
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
<peter@ficc.ferranti.com>

subbarao@phoenix.Princeton.EDU (Kartik Subbarao) (07/21/90)

In article <6914@umd5.umd.edu> rbn@umd5.umd.edu (Ron Natalie) writes:
>
>
>And where whould you have .. point if you had multiple links to
>a directory?  Things would get pretty wierd.
>
>-Ron

Now, why could we not have .. point to a table which listed all the directories
above it?

subbarao@{phoenix,bogey or gauguin}.Princeton.EDU -|Internet
kartik@silvertone.Princeton.EDU (NeXT mail)       -|	
subbarao@pucc.Princeton.EDU		          - Bitnet

jfh@rpp386.cactus.org (John F. Haugh II) (07/22/90)

In article <1990Jul19.121048.16332@cbnews.att.com> mvadh@cbnews.att.com (andrew.d.hay) writes:
>it would be easy to have ln disallow this:
>1)	resolve argv[1] and argv[2] to absolute paths
>2)	determine which path is shorter
>3)	strncmp() both paths for the shorter length
>4)	if you have a match, you're trying to create a loop
>
>this would also let you safely mv directories...

So, how does this relate to the link() system call?  Permitting
a non-tree-like structure to exist is a really bad idea for all
of the reasons in the Dennis Ritchie quote.

Aren't there better things to worry about?  My favorite is why
doesn't the ln command require the use of a -f flag to blast a
target file?
-- 
John F. Haugh II                             UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832                           Domain: jfh@rpp386.cactus.org

                                            Proud Pilot of RS/6000 Serial #1472

guy@auspex.auspex.com (Guy Harris) (07/22/90)

>this would also let you safely mv directories...

What, you can't safely "mv" them now?

henry@zoo.toronto.edu (Henry Spencer) (07/22/90)

In article <18461@rpp386.cactus.org> jfh@rpp386.cactus.org (John F. Haugh II) writes:
>Aren't there better things to worry about?  My favorite is why
>doesn't the ln command require the use of a -f flag to blast a
>target file?

On sane Unix systems, ln fails if the target file exists already.  On
AT&T System V UNIX(R) Operating Systems, it silently goes ahead.  Some
faceless imbecile in the hordes of System V UNIX(R) Operating System
developers thought it would be cute if ln, mv, and cp all worked the
same way.
-- 
NFS:  all the nice semantics of MSDOS, | Henry Spencer at U of Toronto Zoology
and its performance and security too.  |  henry@zoo.toronto.edu   utzoo!henry

jfh@rpp386.cactus.org (John F. Haugh II) (07/22/90)

In article <1990Jul22.035130.12559@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
>In article <18461@rpp386.cactus.org> jfh@rpp386.cactus.org (John F. Haugh II) writes:
>>Aren't there better things to worry about?  My favorite is why
>>doesn't the ln command require the use of a -f flag to blast a
>>target file?
>
>On sane Unix systems, ln fails if the target file exists already.  On
>AT&T System V UNIX(R) Operating Systems, it silently goes ahead.  Some
>faceless imbecile in the hordes of System V UNIX(R) Operating System
>developers thought it would be cute if ln, mv, and cp all worked the
>same way.

Well, now I get to ask the next question ...

My [ second ] favorite question is why doesn't the SunOS ln command
permit the use of the -f flag for blasting an existent target file?

Before answering that question, remember that USG's stupid behavior
existed before Sun's and that in the business world, one stupid
decision deserves another ;-)
-- 
John F. Haugh II                             UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832                           Domain: jfh@rpp386.cactus.org

flaps@dgp.toronto.edu (Alan J Rosenthal) (07/22/90)

In article <10527@odin.corp.sgi.com> schuman@sgi.com (Aaron Schuman) writes:
>>A foolish user could create loops in the directory structure.

mvadh@cbnews.att.com (andrew.d.hay) writes:
>it would be easy to have ln disallow this:
>1)	resolve argv[1] and argv[2] to absolute paths
>2)	determine which path is shorter
>3)	strncmp() both paths for the shorter length
>4)	if you have a match, you're trying to create a loop

It's true that if they match, you're creating a loop.  However, it's not true
that if they don't match you're not creating a loop.
Here's a counterexample:

Suppose your filesystem is on /mnt.
Do this:

	mkdir /mnt/a
	ln /mnt/a /mnt/b
	mkdir /mnt/a/c
	ln /mnt/a /mnt/b/c/d

Now /mnt/a/c/d and /mnt/a are the same.  So you can refer to
/mnt/a/c/d/c/d/c/d/c/d/c/d, etc.

ajr

guy@auspex.auspex.com (Guy Harris) (07/23/90)

>My [ second ] favorite question is why doesn't the SunOS ln command
>permit the use of the -f flag for blasting an existent target file?

Because the (4.3)BSD "ln" command doesn't seem to, either:

	auspex% cp /home/unix_src/bsd4.3/bin/ln.c .
	auspex% cc -o ln ln.c
	auspex% echo >foo
	auspex% echo >bar
	auspex% ./ln -f foo bar
	bar: File exists

and because the command sequence

	rm -f bar && ln -f foo bar

for example, would have done the job quite nicely....

jfh@rpp386.cactus.org (John F. Haugh II) (07/23/90)

In article <1990Jul22.111334.9996@jarvis.csri.toronto.edu> flaps@dgp.toronto.edu (Alan J Rosenthal) writes:
>Suppose your filesystem is on /mnt.
>Do this:
>
>	mkdir /mnt/a
>	ln /mnt/a /mnt/b
>	mkdir /mnt/a/c
>	ln /mnt/a /mnt/b/c/d
>
>Now /mnt/a/c/d and /mnt/a are the same.  So you can refer to
>/mnt/a/c/d/c/d/c/d/c/d/c/d, etc.

Your example is correct - unless /mnt/b/c resolves to /mnt/a/c,
the test will fail.  This can happen because the order of entries
in the directory affects the outcome of the conversion to an
absolute pathname.

The conversion of /mnt/a and /mnt/b/c/d to absolute pathnames
need only recognize that /mnt/a and /mnt/b are identical to know
that a loop is being created.  If "/mnt/a" is resolved, either
"/mnt/a" or "/mnt/b" should be produced.  Performing the same
test on "/mnt/b/c" will produce the same prefix, since the
resolution of "/mnt/b" and "/mnt/a" can be implemented in such a
way as to produce the same result.

It is quite possible to detect the creation of loops in the
filesystem, but the expense and risk doesn't seem worth the
questionable benefit of being able to create randomly shaped
directory trees.
-- 
John F. Haugh II                             UUCP: ...!cs.utexas.edu!rpp386!jfh
Ma Bell: (512) 832-8832                           Domain: jfh@rpp386.cactus.org

flaps@dgp.toronto.edu (Alan J Rosenthal) (07/25/90)

jfh@rpp386.cactus.org (John F. Haugh II) writes:
>The conversion of /mnt/a and /mnt/b/c/d to absolute pathnames need only
>recognize that /mnt/a and /mnt/b are identical to know that a loop is being
>created.
...
>It is quite possible to detect the creation of loops in the filesystem...

I certainly wasn't claiming it wasn't computable to determine whether or not a
new link caused a loop in the filesystem.

However, the original article's method was faulty as I illustrated.  "/mnt/a"
is certainly an absolute pathname, even if due to abnormalities in the
filesystem it is not canonical in some sense.  Once absolute pathnames were
obtained, it advised using strncmp(), which certainly doesn't think that /mnt/a
is an initial string of /mnt/b/c/d.

Do you have a method for detecting loops which doesn't involve searching the
whole i-list or the whole file graph?

peter@ficc.ferranti.com (Peter da Silva) (07/26/90)

In article <1990Jul25.115628.6385@jarvis.csri.toronto.edu> flaps@dgp.toronto.edu (Alan J Rosenthal) writes:
> Do you have a method for detecting loops which doesn't involve searching the
> whole i-list or the whole file graph?

By explicitly examining ".." you can use the standard "pwd" algorithm to
derive a canonical name for a directory. You can then compare prefixes
to determine whether a loop exists, but it would be more efficient to
check for each directory's inode while in the process of deriving the other's
canonical path.
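
A sketch of that idea, assuming (st_dev, st_ino) identifies a directory
and that fchdir(2) is available to restore the working directory; the
canonical-name part (scanning each parent with readdir(), as pwd does) is
left out, since the inode walk alone is enough for loop detection:

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Would hard-linking `target` into `startdir` (or below it) close a loop?
 * Walk ".." from startdir toward the root, comparing every ancestor with
 * target by device and inode; a match means target would become an
 * ancestor of itself.  Returns 1 for loop, 0 for no loop, -1 on error. */
int link_would_loop(const char *startdir, const char *target)
{
    struct stat tgt, cur, parent;
    int savefd, loop = 0;

    if (stat(target, &tgt) < 0 || (savefd = open(".", O_RDONLY)) < 0)
        return -1;
    if (chdir(startdir) < 0) {
        close(savefd);
        return -1;
    }
    for (;;) {
        if (stat(".", &cur) < 0 || stat("..", &parent) < 0) {
            loop = -1;
            break;
        }
        if (cur.st_dev == tgt.st_dev && cur.st_ino == tgt.st_ino) {
            loop = 1;               /* target is an ancestor: loop */
            break;
        }
        if (parent.st_dev == cur.st_dev && parent.st_ino == cur.st_ino)
            break;                  /* "." and ".." coincide: at the root */
        if (chdir("..") < 0) {
            loop = -1;
            break;
        }
    }
    fchdir(savefd);                 /* restore the original working directory */
    close(savefd);
    return loop;
}
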
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
<peter@ficc.ferranti.com>