[comp.databases] Server versus Distributed database?

markh@rtech.UUCP (Mark Hanner) (06/26/87)

In article <2878@blia.BLI.COM> billc@blia.BLI.COM (Bill Coffin) writes:
>
>Anyway, servers and distributed dbms's are not necessarily mutually
>exclusive.  The classic distrib model has single machines in separate
>cities (this is the model shown on the teacher's blackboard when
>"distributed databases" is the day's topic).  This is probably
>unrealistic; nodes on a distributed system could include several
>dissimilar LANs connected by long-haul lines and/or gateways. 
>A LAN is a great place for a server.  A distributed dbms could 
>be built on top of this model, treating the whole LAN, via its server, 
>as a single node in the distributed dbms.
>

i would go so far as to say that the Server and the Distributed Database
are BOTH necessary. ingres/star and sybase are both designed around
the concept of multiple autonomous servers with a distributed layer
on top of them to provide transparency. if you have one of the kinds
of applications that a server handles better (very high transaction
rates, for example), you can still layer the distributed access on
top of it.
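
to make "layer the distributed access on top" concrete, here is a toy
sketch in c (my own names and structures--not ingres/star's or
sybase's actual internals): the distributed layer keeps a catalog of
which autonomous server owns which table and routes each query
accordingly, so the application never has to know where the data lives.

    /* toy distributed layer: route a query to the autonomous
     * server that owns the table.  purely illustrative. */
    #include <stdio.h>
    #include <string.h>

    struct server  { char *name; char *host; };
    struct catalog { char *table; struct server *where; };

    struct server s1 = { "payroll_server",  "vax1" };
    struct server s2 = { "accounts_server", "vax2" };

    struct catalog cat[] = {
        { "employees", &s1 },
        { "accounts",  &s2 },
    };

    /* look up which autonomous server owns a table */
    struct server *route(char *table)
    {
        int i;
        for (i = 0; i < (int)(sizeof(cat)/sizeof(cat[0])); i++)
            if (strcmp(cat[i].table, table) == 0)
                return cat[i].where;
        return NULL;
    }

    int main(void)
    {
        struct server *s = route("accounts");
        if (s)
            printf("send query to %s on %s\n", s->name, s->host);
        return 0;
    }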

the distributed model also provides an alternative to buying
increasingly large central server machines for very large applications.
if i can buy N vax 8800s for the same price as an IBM 3090 (N being
whatever is required to get the same number of mips), i can still run
a banking application by spreading the load across the processors--i
may even find that N 8800s are cheaper. you can even select the server
that meets the needs of a particular application, and still be able to
consolidate organization-wide data. [this is what banks really do:
both consolidating data from multiple systems (those point-of-sale
terminals you see in grocery stores and gas stations debit your
account as a batch job--not in real time) and off-loading front-end
processing (teller machines are sophisticated computers--can you
imagine every keystroke sending an interrupt to a central processor?)]
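
the load-spreading itself is usually just partitioning. a crude sketch
(entirely hypothetical--no bank does it this simply): hash the account
number to pick which of the N machines owns the account, so each
machine sees roughly 1/N of the transactions.

    /* crude account partitioning across N servers.
     * hypothetical layout, for illustration only. */
    #include <stdio.h>

    #define NSERVERS 4              /* our "N vax 8800s" */

    /* every account lives on exactly one server, so a debit
     * touches only that machine */
    int home_server(long acct)
    {
        return (int)(acct % NSERVERS);
    }

    int main(void)
    {
        long acct = 1048277L;
        printf("account %ld is on server %d\n",
               acct, home_server(acct));
        return 0;
    }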

bill also mentioned the problem of heterogeneity earlier. there are
actually two kinds of heterogeneity. dissimilar networks (tcp/ip,
decnet, sna, etc.) require gateways, but a properly designed server
will be able to bridge heterogeneous hardware transparently, with the
help of the network vendors who are jumping to build these gateways.
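
"transparently" here just means the dbms code talks through one
internal interface and the protocol-specific pieces live behind it. a
made-up illustration in c (none of these names come from any real
product):

    /* one internal interface, many transports: a made-up sketch
     * of how a server can hide the network underneath it. */
    #include <stdio.h>

    struct transport {
        char *name;
        int (*send)(char *msg);     /* protocol-specific code */
    };

    int tcp_send(char *msg) { printf("tcp/ip: %s\n", msg); return 0; }
    int sna_send(char *msg) { printf("sna gateway: %s\n", msg); return 0; }

    struct transport nets[] = {
        { "tcpip", tcp_send },
        { "sna",   sna_send },
    };

    int main(void)
    {
        /* the dbms calls send() and never cares which net it is */
        nets[0].send("select * from accounts");
        nets[1].send("select * from accounts");
        return 0;
    }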

the more difficult problem is dbms gateways to non-relational systems.
relational servers work best if they communicate using only sql and
data, but sql is very difficult to translate into a language such as
dl/1 once the queries get at all complex. there will always be
limitations in this area (IMS will certainly last as long as cobol
has :-) no flames from cobol hacks, please!), but something is better
than nothing.
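
to see why the translation is hard, consider a toy gateway (mine, and
grossly oversimplified): an exact-key select maps naturally onto a
single hierarchical get-unique call, but the moment the query contains
a join or an "or" there is no single dl/1 call to map it to, and the
gateway has to decompose the query or give up.

    /* toy illustration of the sql-to-hierarchical mismatch.
     * not real dl/1 -- just the shape of the problem. */
    #include <stdio.h>
    #include <string.h>

    void translate(char *sql)
    {
        char *w = strstr(sql, "where");

        /* a join (comma in the from-list) or an "or" has no
         * single-call analog in a hierarchical language */
        if (strstr(sql, " or ") || strchr(sql, ','))
            printf("must decompose or give up: %s\n", sql);
        else if (w)
            printf("one GU call, qualification: %s\n", w + 6);
        else
            printf("sequential GN calls (full scan): %s\n", sql);
    }

    int main(void)
    {
        translate("select * from acct where acctno = 42");
        translate("select * from acct, cust where acct.cno = cust.cno");
        return 0;
    }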

one of the main factors driving heterogeneous distributed systems is
that most organizations already are heterogeneous, and as they consider
the alternatives, a software solution that leaves the existing hardware
intact is far cheaper than throwing it out and "getting a bigger
server". custom servers can then be added later if particular
applications demand them, and the software distributed database will
be able to include them through a gateway.

cheers,
mark

author's plug: for more information on distributed databases, read my
article in the may unix review (jim gray's article is in the same
issue).
-- 
markh@rtech.ARPA
ucbvax!mtxinu!rtech!markh
"someone else was using my login to express the above opinions..."