[comp.org.eff.talk] Software vendor liability/culpability

earle@elroy.jpl.nasa.gov (Greg Earle (Sun Software)) (05/31/91)

[Apologies if these groups are not the right place for this kind of question.]
[This might be better for comp.risks I suppose, I'm not sure.]

Consider the following hypothetical situation:

A software vendor, XYZ Inc., sells a "network" software product.

The software product itself requires the ability to broadcast on the network to
perform part of its function.  The product could be anything which this feature
would be useful for, e.g., a multi-user chat program, a network game (a la the
X11 version of "mazewar"), or what have you.  Several vendors of BSD UNIX based
systems offer a way to access network interfaces directly, usually via a
driver and an associated device in /dev.  Since the ability to read and
write to the network directly can be considered a major security hole, under
normal circumstances, access to such a device is restricted to the super-user.

Let's say that the software does not do the "correct" thing, "correct" in this
case being a setuid root program that opens the network device, and then
immediately resets itself to the uid/gid of the user running the software (via
setreuid() and setregid() under BSD UNIX).  Let's say instead that the
program's installation script, if installed by "root", were to instead do
something like silently chmod the network device to mode 666 or 777, making it
world-readable and world-writable for all users.

Continuing the scenario, consider the following.  A user on a system which has
had this software installed now discovers that the network device is readable.
Using a program to display packet traffic, such as "tcpdump", this unscrupulous
user then takes advantage of the newly created security hole to snoop on the
network, and in the process s/he obtains a password for another user on another
machine as it goes by, perhaps in an FTP or Telnet packet.  S/he then uses this
information to break into the other system using the snooped user's name and
password, and proceeds to delete all of the user's files on the remote machine.

Taking the simplistic route, let's assume then that the violated user discovers
the infiltration, and the sys admin traces the invasion back to the originating
machine.  Contacting the sys admin on the cracker machine, they quickly narrow
down the candidates, and the unscrupulous snooper is discovered.  Upon
questioning, the snooper admits to having gotten the password by snooping on
the net, due to the network device being world-readable.  Eventually, the sys
admin miraculously determines that the normally -rw------- device was changed
at the same time as XYZ's software product was installed.  The sys admin then
looks at the installation script and discovers the modification of the device,
which gave the cracker access to the Ethernet that would not normally be
possible.

The bottom line: in such a circumstance, is company XYZ liable for damages
caused as a direct/indirect result of the security hole opened due to the
installation of their product?  Or is it a case of "If you don't read the
installation script of all products you install, then you get what you deserve"
for the sys admin of the cracker system?  In general, is a software vendor
liable/responsible for anything deleterious that occurs as a byproduct of the
installation of their product(s) on a customer's machine?

[Please followup to the newsgroups rather than replying by e-mail.  Thanks.]

-- 
	Greg Earle			earle@Sun.COM
	Sun Microsystems		earle@mahendo.JPL.NASA.GOV
	JPL on-site Software Support	poseur!earle@elroy.JPL.NASA.GOV

de5@ornl.gov (Dave Sill) (05/31/91)

In article <1991May31.073704.4847@elroy.jpl.nasa.gov>, earle@elroy.jpl.nasa.gov (Greg Earle (Sun Software)) writes:
>
>The bottom line: in such a circumstance, is company XYZ liable for damages
>caused as a direct/indirect result of the security hole opened due to the
>installation of their product?

Yes, unless they have taken reasonable action to notify the installer
of potentially harmful side effects.

>Or is it a case of "If you don't read the
>installation script of all products you install, then you get what
>you deserve" 
>for the sys admin of the cracker system?  In general, is a software vendor
>liable/responsible for anything deleterious that occurs as a byproduct of the
>installation of their product(s) on a customer's machine?

Yes, if the vendor provides a script or installation instructions,
they're responsible for making reasonably sure that they're safe.

-- 
Dave Sill (de5@ornl.gov)	  It will be a great day when our schools have
Martin Marietta Energy Systems    all the money they need and the Air Force
Workstation Support               has to hold a bake sale to buy a new bomber.

barmar@think.com (Barry Margolin) (06/01/91)

In article <1991May31.132152.10113@cs.utk.edu> Dave Sill <de5@ornl.gov> writes:
>In article <1991May31.073704.4847@elroy.jpl.nasa.gov>, earle@elroy.jpl.nasa.gov (Greg Earle (Sun Software)) writes:
>>
>>The bottom line: in such a circumstance, is company XYZ liable for damages
>>caused as a direct/indirect result of the security hole opened due to the
>>installation of their product?
>
>Yes, unless they have taken reasonable action to notify the installer
>of potentially harmful side effects.

Intuitively, this seems correct.  I'm not sure if it's true under the law,
though (take my comments with a grain of salt, as I'm not a lawyer).  Much
software comes with warranties that disclaim liability for damages due to
use of the product.  Often, the best they will warrant is that the software
behaves as specified in the documentation; unless the documentation says
that the software *doesn't* change the protection on security-relevant
files, they can claim that this behavior is in spec.

On the other hand, there are many "implied" warranties that are often in
force.  The customer could probably claim that they assume that software
does not intentionally go around opening huge security holes without
mentioning it in the documentation.  In other words, the vendor is expected
to be reasonable.
-- 
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

zane@ddsw1.MCS.COM (Sameer Parekh) (06/06/91)

	Looks to me like the liability lies with the unscrupulous user.  It
was this user's choice to snoop the network and take the password and delete
the other person's files.  The company is guilty of irresponsible
programming, but unless the program itself found the password, the program
holds no liability.  (Just a VERY bad reputation.)  (And the sysadmin of the
site holds no liability either.)
-- 
The Ravings of the Insane Maniac Sameer Parekh -- zane@ddsw1.MCS.COM

ts@cup.portal.com (Tim W Smith) (06/08/91)

1) What were unencrypted passwords doing on the network?

2) Could the vendor of the network software that unprotected the /dev
node argue that on a system with properly designed security, such a lack
of protection would cause no problems, and so the fault lies with either
the designers of the operating system's network code, because they blew
the security design, or the people who selected that operating system
for this installation, because they selected a system without making sure
it had a good security design?  In other words, on a properly designed
system, the network software would not have caused any damage by making
the /dev file readable, so it is not their fault.

						Tim Smith

barmar@think.com (Barry Margolin) (06/09/91)

In article <43086@cup.portal.com> ts@cup.portal.com (Tim W Smith) writes:
>1) What were unencrypted passwords doing on the network?

The currently standard remote login and file transfer programs do not have
any other authentication mechanism besides passwords.  And it doesn't
matter whether they are encrypted or not -- encrypted passwords can be
captured and played back just as easily as plaintext.  You need a system
like Kerberos, or a one-time code (we use a system from Security Dynamics
that depends on a smartcard) to get around this problem.

>2) Could the vendor of the network software that unprotected the /dev
>node argue that on a system with properly designed security, such a lack
>of protection would cause no problems

Not likely.  If the security of the system is dependent upon correct
protection on certain devices, and the network software intentionally
changes this protection, it is clearly disabling the security.  The system
was reasonably secure when operated according to the instructions, but this
software violates those instructions.

-- 
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

ts@cup.portal.com (Tim W Smith) (06/10/91)

>>2) Could the vendor of the network software that unprotected the /dev
>>node argue that on a system with properly designed security, such a lack
>>of protection would cause no problems
>
>Not likely.  If the security of the system is dependent upon correct
>protection on certain devices, and the network software intentionally
>changes this protection, it is clearly disabling the security.  The system
>was reasonably secure when operated according to the instructions, but this
>software violates those instructions.

The original poster said something like "some BSD based systems" when
talking about the /dev entry.  I don't know enough about BSD networking
to know if unprotecting the /dev entry would cause a problem on *all* BSD
based systems.  Would it?

On System V (or, at least, the SCO version of System V when using the
Lachman implementation of TCP/IP), I don't think that there would be
a problem, because the streams driver for the network card determines
what stream to send an incoming packet to based on the packet type.
The TCP/IP software should already have a stream opened to the driver
for all IP packets, so someone coming in later would not be able to
grab these.  Do any BSD systems work like this?  If so, the vendor might
be able to argue that a particular system that does not behave like this
is at fault.

(I think the network software vendor should be strung up by their
 transceivers, but since I plan to go to law school in a couple years,
 I figure I should practice arguing for the side I don't agree with,
 which is why I keep trying to come up with ways for them to squirm
 out of liability! :-) )

						Tim Smith

barmar@think.com (Barry Margolin) (06/11/91)

In article <43140@cup.portal.com> ts@cup.portal.com (Tim W Smith) writes:
>The original poster said something like "some BSD based systems" when
>talking about the /dev entry.  I don't know enough about BSD networking
>to know if unprotecting the /dev entry would cause a problem on *all* BSD
>based systems.  Would it?

BSD networking doesn't use /dev at all.  I thought the original posting was
a hypothetical question about a general class of possible problems.

>On System V (or, at least, the SCO version of System V when using the
>Lachman implementation of TCP/IP), I don't think that there would be
>a problem, because the streams driver for the network card determines
>what stream to send an incoming packet to based on the packet type.
>The TCP/IP software should already have a stream opened to the driver
>for all IP packets, so someone coming in later would not be able to
>grab these.  

Some Unix networking implementations provide a way for a suitably
privileged process to view packets going through the network driver without
affecting whether they are received by the intended process (Sun's
etherfind and the publicly available tcpdump make use of this facility to
turn a Unix box into a network monitor).  For networks implemented through
a /dev entry, I could easily imagine this being done by opening the device
and then doing an appropriate ioctl(), and the ability to do this might be
controlled by the protection on the device file.

>	      Do any BSD systems work like this?  If so, the vendor might
>be able to argue that a particular system that does not behave like this
>is at fault.

A vendor who claims their software works on a particular system can't claim
after the fact that the system "should" behave a certain way.  If they've
qualified their software for that system, then it should be designed for
the way that system *does* behave, not how they would like it to behave.
-- 
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar