[comp.society.futures] Musings on the future of computing.

bzs@world.std.com (Barry Shein) (02/07/91)

[This seems to have not made it out to USENET so I am reposting -bzs]


Two competing models of computing are currently polarizing development
directions in this field. One is characterized by large, centralized
servers with minimal power directly on the desktop (the "X-terminal"
model.) The other is characterized by the "killer micros" wherein
everyone has roughly equivalent and competent compute power on the
desk.

Each model has its proponents and its rationales.

At one extreme are projects such as Plan 9 at Bell Labs, which utilizes
what are essentially X-terminals (they don't run X, but are devices
capable of running little else than a window system) and a system of
servers (compute, file, etc.).

The rationales for such an approach are:

	A) People should not (and do not want to) waste their
	time administering the system on their desktop. Powerful,
	self-reliant desktop systems are complicated and require
	constant care and attention. Efforts to eliminate or simplify
	this administration are futile. Mere software version updates
	can be a nightmare.

	B) Most cycles within an organization (and hence, most of
	the cost) are wasted if power is put onto the desktop. The
	assumption here is that one generally cannot utilize the
	idle power on someone else's system as effectively as one
	could a centralized server with the sum of many machines'
	power available in one large system.

	C) The complicated hardware spread across desktops is
	expensive and difficult to maintain.

	D) There are economies of scale available in centralized
	systems, such as large, centralized information repositories
	(eg. phone books, maps, textual materials etc.)

The other extreme is typified by the high-end workstation environment,
providing near-mainframe CPU performance and hundreds if not thousands
of MB of disk on every desktop.

The rationales for this approach are:

	A) Predictability of response: one is not captive to
	the whims of other users who might find a way to make
	life on a shared system miserable by constantly running
	large jobs.

	B) Distribution of political control: Centralized resources
	tend to build centralized control with all its political
	foibles. Usage becomes subject to least common denominator
	rules if resources become scarce in any way and these rules
	rarely take into account the actual nature of the job at
	hand being attempted by the individual user. Worse, centralized
	systems tend to breed a certain amount of pettiness and paranoia
	where real or imagined infractions are punished by denial of
	access in an attempt to maintain respect for "the rules".
	Yet worse, influential but corrupt or misguided administrators
	can easily make one's life hell by manipulating centralized
	resource allocations.

	C) Purchasing decisions become centralized and often
	devolve into reflecting ease of maintenance and administration
	rather than work goals. For example, hooking image scanners
	to centralized systems is often very cumbersome and tends
	to be avoided whereas hooking the same devices to desktop
	systems is almost trivial and any inconveniences (e.g. if
	the scanner causes the system to hang or crash occasionally)
	are borne by the owner of the system and not subject to
	group consensus.

	D) If the centralized system is unavailable then many
	people would be idled. In a more distributed system
	fewer people are idled when a machine becomes unavailable,
	often just the person whose machine has died.

One observation is that centralized systems tend to be information
oriented while decentralized systems tend to be compute oriented.

If the goal is to provide wide access to large bodies of information
then the centralized scheme tends to be attractive. If the goal is to
provide everyone with access to reliable amounts of computing power,
then decentralized schemes tend to be preferred.

Thoughts and expansion appreciated.

-- 
        -Barry Shein

Software Tool & Die    | bzs@world.std.com          | uunet!world!bzs
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD

speyer@joy.cad.mcc.com (Bruce Speyer) (02/07/91)

In article <1991Feb6.165031.14655@world.std.com> bzs@world.std.com (Barry Shein) writes:
>... Thoughts and expansion appreciated.

I haven't been following this group but I happened to catch Dan Yurman's
article (message id: <9102031827.AA72323@gemstone.inel.gov>) today which I
really liked.  Personally, I would like to see more detailed future visions for
various application areas.  Also, I presume discussions have been taking place
concerning the technology behind these visions.

Concerning the two computational models for the future: Obviously, there are
benefits and detriments to the two different models. Most computing
instantiations will be a hybrid.  However, I dispute the two major arguments
presented in support of the model with large compute servers and desktop
terminals: 1) user-administration a) is easier in this model and b) is a major
issue; and 2) the general user wouldn't be able to use the compute power of a
server on their desktop anyway.

First, we are talking about future computing models not today's state of the
art.  I would be greatly disappointed and surprised if system administration,
along with general OS development, did not make great advances in the next 10
years.  I think the issue of user-level administration is a minimal one.

Secondly, I believe the users of the future will be able to use a great deal
of local computes, perhaps dedicated to special purposes.  It seems that we
are moving away from the general application paradigm of today, which is just
applying tools to tasks, toward one which integrates the dynamic component
(process) of the application with the static portion of the environment (data).

What this means (I know this is way too brief, but it's all I have time for)
is that as more and more of the process within an enterprise is considered,
which itself accesses more and more of the information infrastructure within
the enterprise, the future of computing will move away from the simple
tool-to-task paradigm and rapidly move toward a model of continuously
simulating the enterprise. [In fact, there are many limited examples of this
more advanced paradigm available today.] (The term enterprise is being used
here to refer to a coherent, interconnected organization or entity.  At the
highest level today something like the military-industrial complex is an
enterprise which is made up of many other, perhaps overlapping, enterprises
which are composed...)

I say simulation will be the norm because enterprises operate continuously not
discretely (like a tool being applied to a task).  If future computing is
dedicated to supporting the infrastructure and operation of the enterprise then
it must model and compute the enterprise in tandem with the real organization.

I know my point may be confusing so I will try to illustrate.  Take a jet
pilot in an advanced fighter.  Is he/she flying the plane or manipulating the
simulation which controls the plane?  It seems to me that both realities are
taking place at the same time and the pilot is interacting in both
simultaneously.  They see and react to what is physically happening and also
interact with the simulator (command and control).

Likewise, I see the future user interacting with the simulation of the
enterprise at the same time they are dealing with the physical organization.
This is going to take a lot of computes to support and link the two (or more)
realities.  Furthermore, each user defines and manipulates their own view(s) of
the enterprise which means all these different layers of abstraction must be
managed in a consistent (and often real-time) fashion.

In order to effect this interaction the user must be able to generate
stimulus and receive stimulus.  We can imagine this stimulus being advanced
visualization graphics, speech recognition, etc. in order to make all of
these layers of abstraction comprehensible to the normal human.  The terminal
compute model scenario cannot begin to support these kinds of real-time
requirements.

You can also imagine the user running many different scenarios through the
enterprise simulator: what scenarios and interactions were used to arrive at
this current state of the enterprise? are there other valid alternate states
available to exercise? what would be the impact of changing one of the
processes in the enterprise? constructing a new view of the enterprise? etc.
Again this is extremely (both local and distributed) compute intensive.
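
To make the "what if" flavor a bit more concrete, here is a toy sketch (in
Python, with the Enterprise model and its events entirely invented) of
replaying alternate scenarios against a copy of the enterprise state:

    # Toy sketch: each scenario is replayed against a copy of the base state,
    # so alternate futures can be compared without disturbing the "real" one.
    import copy

    class Enterprise:
        def __init__(self):
            self.state = {"orders": 0, "inventory": 100}

        def apply(self, event):
            kind, qty = event
            if kind == "order":
                self.state["orders"] += qty
                self.state["inventory"] -= qty
            elif kind == "restock":
                self.state["inventory"] += qty

    def run_scenario(base, events):
        world = copy.deepcopy(base)     # leave the base enterprise untouched
        for event in events:
            world.apply(event)
        return world.state

    e = Enterprise()
    print(run_scenario(e, [("order", 30), ("restock", 50)]))  # one alternate future
    print(run_scenario(e, [("order", 80)]))                   # another, same base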

Concluding back to the original question: certainly real-time computer
response requires local dedicated computes; complex 3D visualization and
animation could easily use the current state-of-the-art capability of a
million-plus vectors or several hundred thousand polygons per second, which
today is found only on the most advanced graphics hardware.  Also, if my
enterprise simulation premise holds then most users will be doing real-time,
continuous, complex simulations.  Either there must be serious local computes
or else there will need to be at least a fairly high ratio of servers for a
given number of users.

You did say expansion welcomed. :-)
	Bruce Speyer / MCC CAD Program      EMail: speyer@mcc.com
	3500 West Balcones Center Drive     Phone: [512] 338-3668
	Austin, TX. 78759                   Fax:   [512] 338-3897

janssen@parc.xerox.com (Bill Janssen) (02/07/91)

(the phrase "simulating the enterprise" immediately reminded me of MCC... :-)

I think the future is more intricate than we might imagine.  My model
of 20 years from now is that every computer is linked to every other,
via various networks.  Programs as such don't exist to the user, but
rather the world is shown as a collection of objects and services.
All the objects (people, files, machines, databases) are "actors" in
that they are "continuously" animated -- one can never contact one
without finding it "ready to run".  All objects are theoretically
accessible to each other (though in reality objects may exist within
domains which are opaque to other objects, so that messages can't get
through).  Computations take place on whatever machine has the
appropriate data and cycles to spare, so that one transaction might
actually be computed on your desktop (for a simple request of an
object which one has cached), and the next in France (it being simpler
to return the "number-of-pages" in the "City-Statutes-of-Dijon" object
than to return a full copy so that you can count the pages locally).
Similarly, services might take place locally, or might be forwarded to
a more appropriate computational nexus.
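
A rough sketch of that placement decision, with every name invented, might
look like this:

    # Invented names throughout -- the point is only that the *question*
    # travels to wherever the data already lives, instead of the data
    # travelling to you.
    class RemoteDocument:
        def __init__(self, host, name, cached_pages=None):
            self.host = host                    # e.g. a machine in France
            self.name = name                    # e.g. "City-Statutes-of-Dijon"
            self.cached_pages = cached_pages    # list of pages, if we have a copy

        def number_of_pages(self):
            if self.cached_pages is not None:
                return len(self.cached_pages)           # cheap: done on your desktop
            return self._remote_call("number_of_pages") # otherwise ask the remote object

        def _remote_call(self, method, *args):
            # stand-in for whatever message-passing layer the objects actually use
            raise NotImplementedError("ship (method, args) to %s" % self.host)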

We also have to take into account the fact that in the next 10 years,
single-CPU machines will disappear.  Specialized graphics controllers
(possibly multi-CPU) will be the standard; symmetric multiprocessing
of the "user" computation; specialized compression and protocol chips
for the net and I/O interfaces.  The year 2001 equivalent of the 80386
will have from 2 to 16 "standard" processors, along with some number
of graphics and I/O processors (including DMA).  Computational power
may become more of a commodity, like electricity.  People will add
capability in kilo-MIP :-) chunks -- which may be simply an add-in
board for your PC, or a whole new box.  Add to this the notions being
experimented with in OS's of process migration (and the even more
interesting research being done on processes "bidding" for unused
cycles in a computation market), and the lines drawn between "server"
and "workstation" become too blurred to read.

On the whole, I agree with Bruce about the continuous computation.
Millions of objects will be continuously interacting with each other
to form what we will think of as the computational society.
Simulations of simplified models of this society will be essential to
all service-providers in it -- and this means your local print server
as well as businesses.  Users will interact with the "real
(computational) world" through some simulation/abstraction of its
real, incredibly rich and dynamic, state.  One will ask for a
"document" object to be "printed", which will trigger a planning task
which will in turn:

o  consider the type of the document

o  printing formats into which it might be rendered,

o  your probable location,

o  what kind of priority/urgency is probably meant,

o  what "print-service-providers" you have "frequent-print-user" bonus
   arrangements with :-)

o  whether some messenger service could get it from the local Kinko's in
   reasonable time (because you like the document quality from that
   print-service-provider)

o  whether the local LaserWriter is out of paper (and if anyone is
   perceived as "in-the-act-of-reloading-the-paper-tray")

and so on and so forth to build and execute a custom system all
designed to give one a hardcopy form of the document.  During the
process, it will send messages to many other documents, which may in
turn build and execute complex systems to respond appropriately to
those messages.
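
A toy version of that planning step, with the services, fields and weights
all made up, might look like this; the point is just that "print this"
becomes a small planning problem rather than a fixed program:

    # All fields and weights invented; "planning" here is just scoring candidates.
    def plan_print(doc_format, urgency, services):
        def score(svc):
            s = 0
            if doc_format in svc["formats"]:
                s += 10                    # can it render this document at all?
            s -= svc["minutes_away"]       # prefer output near your probable location
            if urgency == "now" and svc["out_of_paper"]:
                s -= 100                   # the LaserWriter with an empty tray loses
            if svc["bonus_member"]:
                s += 1                     # frequent-print-user arrangement :-)
            return s
        return max(services, key=score)

    services = [
        {"name": "local LaserWriter", "formats": ["PostScript"],
         "minutes_away": 1,  "out_of_paper": True,  "bonus_member": False},
        {"name": "Kinko's downtown",  "formats": ["PostScript"],
         "minutes_away": 20, "out_of_paper": False, "bonus_member": True},
    ]
    print(plan_print("PostScript", "now", services)["name"])  # -> Kinko's downtown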

If you think this model is overblown, consider what happens when you
say "ftp <somehost>" to your shell...

Bill
--
 Bill Janssen        janssen@parc.xerox.com      (415) 494-4763
 Xerox Palo Alto Research Center
 3333 Coyote Hill Road, Palo Alto, California   94304

peter@taronga.hackercorp.com (Peter da Silva) (02/08/91)

In article <1991Feb6.165031.14655@world.std.com>, bzs@world.std.com (Barry Shein) writes:
> At one extreme are projects such as Plan 9 at Bell Labs, which utilizes
> what are essentially X-terminals (they don't run X, but are devices
> capable of running little else than a window system) and a system of
> servers (compute, file, etc.).

No, the GNOT terminals in Plan 9 *are* almost diskless workstations,
and can do work locally. But because they're diskless they don't need
to be maintained and administered.
-- 
               (peter@taronga.uucp.ferranti.com)
   `-_-'
    'U`

MACALLSTR@vax1.physics.oxford.ac.uk (02/08/91)

I believe the concept of X-Windows terminals served by powerful servers is
a 'dead duck' and I would be surprised if this were the way forward.
The way forward, I believe, is for processing power on the desk, with a
windowing system (X-Windows client-server running on the desk), with local
storage and, perhaps, when price/reliability permits, personal tape/disk
drives and printers. The way forward must be for the computer-on-the-desk to
be independent of any central machine for day-to-day running, i.e. it will
only be down if the user switches it off or if there's a fault.
However, users will not want to be involved with management (backups, etc.)
and software updates. This will be done, automatically, across the network.
There will still be a requirement for a computing service in a
management/advisory role.
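
A minimal sketch of what that unattended, across-the-network management
might look like (the management server, file format and install command are
all hypothetical):

    import json, subprocess, urllib.request

    # Hypothetical management server and package tool -- only the shape of
    # the idea, not any facility that exists today.
    MANAGEMENT_SERVER = "http://mgmt.example.org/desired-state.json"

    def nightly_update(installed):
        """Bring this desktop's software in line with the published list."""
        desired = json.load(urllib.request.urlopen(MANAGEMENT_SERVER))
        for pkg, version in desired.items():
            if installed.get(pkg) != version:
                subprocess.run(["install-package", pkg, version], check=True)
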
I also believe that we'll see many portable turn-key systems in specialist
areas as soon as hardware, etc. becomes sufficiently compact and portable, in
much the same way as we have calculators at the moment.

Summary: the way forward for the immediate future.
         
          Personal stand-alone systems with fast network connections.
          User has no involvement in system management.
          System management is performed across the network, automatically,
            as required.
          Compact turn-key systems in specialist areas.


Just some personal views,
 John


                            +-------------------+
                            |   ( )       ( )   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Name           : John B. Macallister                                        |
| Postal address : Nuclear Physics Laboratory, Keble Road, Oxford OX1 3RH, UK.|
| Telephone      : +44-865-273388 (direct) +44-865-273333 (tel reception)     |
| Telex          : 83295 NUCLOX G                                             |
| Fax            : +44-865-273418                                             |
| JANET          : MACALLSTR @ UK.AC.OX.PH                                    |
|                : UK.AC.OX.PH is DTE 000050250060 on the UK JANET network    |
|                : UK.AC.OX.PH is DTE 204334505211 on IXI                     |
| UUCP/USENET    : MACALLSTR @ oxphv1                                         |
| HEPNET/SPAN    : MACALLSTR @ OXPHV1                                         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+