[comp.databases] Any TUXEDO users here?

denap@alta.sw.stratus.com (Tom DeNapoli) (01/11/91)

Since there's no comp.oltp or comp.unix.oltp I figure this is the
next best place...

I'd like to hear from any TUXEDO users out there, specifically
/T.  What I'm looking for is info on ease of use, administration,
API and your general satisfaction.  Sorry to be so general in my
request.  I'm doing an apples to oranges comparison and the more
info I get the better.

--
  Tom DeNapoli              | Stratus Computer, Inc.
  denap@alta.sw.stratus.com | 55 Fairbanks Blvd M23EN3
  uunet!lectroid!alta!denap | Marlboro, MA 01752

denap@alta.sw.stratus.com (Tom DeNapoli) (01/17/91)

I haven't had as much response to this as I had hoped.  

3 "me too"s for any info I got.
1 response from ITI (mostly a who-we-are/what-we-do message),
  with an offer of additional info via phone, which I may have to
  pursue due to the lack of posted/emailed user experiences.
1 response from a Tux developer.  
  also with an offer of additional info.

I was after any end-user comments or some details of what holes
ITI filled.  Maybe the intersection of this board and those users
is NIL.  

-Tom DeNapoli

--
  Tom DeNapoli              | Stratus Computer, Inc.
  denap@alta.sw.stratus.com | 55 Fairbanks Blvd M23EN3
  uunet!lectroid!alta!denap | Marlboro, MA 01752

dhepner@hpcuhc.cup.hp.com (Dan Hepner) (01/17/91)

>dbmoore@oracle.uucp (Dennis Moore)
>>denap@alta.sw.stratus.com (Tom DeNapoli) writes:

>>Since there's no comp.oltp or comp.unix.oltp I figure this is the
>>next best place...

Maybe there should be. But as yet the OLTP volume hasn't threatened
comp.databases.  Maybe that will change.

>>I'd like to hear from any TUXEDO users out there, specifically
>>/T.  What I'm looking for is info on ease of use, administration,
>>API and your general satisfaction.  Sorry to be so general in my
>>request.  I'm doing an apples to oranges comparison and the more
>>info I get the better.

The first relevant fact is that _Unix_ transaction monitors in
general, and Tuxedo/T in particular, have only begun to penetrate
the Unix OLTP market.  Thus, the base of end user commercial customers
today is somewhat limited.

>There is a company called Independence Technologies Inc. (ITI) which has made
>a commercial version of Tuxedo/T which addresses some deficiencies of the
>original.  Hopefully someone from ITI (Kurt?  Are you out there?) will
>describe their experiences with Tuxedo and perhaps even describe their product
>a little.
>-- Dennis Moore, my own opinions, etcetcetc

Data points.  The combination product ITI, Tuxedo/T 4.1, and Oracle will
be available on HP's series 800 systems very soon (1Q91).  The same
combination with Informix will be available 2Q91.  There are also
plans to provide ITI/Tuxedo on the HP 9000-1240 Fault Tolerant computer.

ITI's phone number, which might be a reasonable source of info about
Tuxedo/T, particularly on those platforms which they have ported
to, is (415) 438-2000.  AT&T, who developed Tuxedo, also actively
markets the product and may well provide customer references.

Dan Hepner
Not a statement of the Hewlett Packard Company.

miket@blia.sharebase.com (Mike Tossy) (01/18/91)

In article <2060007@hpcuhc.cup.hp.com>, dhepner@hpcuhc.cup.hp.com (Dan Hepner) writes:
> The first relevant fact is that _Unix_ transaction monitors in
> general, and Tuxedo/T in particular, have only begun to penetrate
> the Unix OLTP market.  Thus, the base of end user commercial customers
> today is somewhat limited.

Since they have only begun to penetrate the market, perhaps someone will be
good enough to explain what they are and when they are useful? 

--
Mike Tossy			ShareBase Corporation (part of Teradata)
miket@blia.bli.com		(used to be Britton Lee)

dafuller@sequent.UUCP (David Fuller) (01/18/91)

In article <13333@blia.sharebase.com> miket@blia.sharebase.com (Mike Tossy) writes:
>In article <2060007@hpcuhc.cup.hp.com>, dhepner@hpcuhc.cup.hp.com (Dan Hepner) writes:
>> The first relevant fact is that _Unix_ transaction monitors in
>> general, and Tuxedo/T in particular, have only begun to penetrate
>> the Unix OLTP market.  Thus, the base of end user commercial customers
>> today is somewhat limited.
>
>Since they have only begun to penetrate the market, perhaps someone will be
>good enough to explain what they are and when they are useful? 

The wizened IBM or Tandem hand would claim that the assertion of "recent
market penetration" is equivalent to "UNIX market immaturity".

Assuming this is not a marketing tease by our friends at ShareBase,
I would like to give y'all a brief talk about TP Monitors.

Waggish view: it's a cheap trick to save RAM and eliminate the contention and
resource consumption entailed by a 1:1 relationship between users and 
their database servers.  

Realism in previous proprietary environments: a pragmatic piece of the 
DBMS environment, akin to the invention of the statistical multiplexer.

Desired functionality: geographic independence of the transaction service,
with the application ignorant of response-time requirements, geographic
actualities, and the mechanics of atomically updating the state of some
customer's business.

If you look at the first-brush attempts by UNIX developers to get OLTP
functionality driven onto UNIX, you see that each application process is
tied to a back end which is responsible for DBMS service.  

This is bad.  Tandem proved it was bad.  IBM knows it is bad.  The fact
is that if you have 1 backend associated with each frontend, the backends
are not utilized very well, and from the point of view of a backend,
reserving vital resources costs far too many cycles because there are  
usually a lot of 1:1 backends lying idle.  You waste time inventing
protocols that catalog the "don't care" conditions.

If you inject a layer between the front and back ends, you can assert
whatever semantic benefits you need: you no longer have to coordinate
an n:n environment, and you also benefit from reduced worst-case
locking assumptions, because you're asserting an n:m relationship 
between front and back ends.  

If you are smart you can let the front end do inquiries against a large
shared memory buffer (on an SMP machine) knowing that because the tables
you reference have a fixed update schedule, locking is not a problem; 
let the operating system's LRU algorithm decide which pages are hot.  

The bottom line is pragmatic conservation of resource in today's OLTP
environment for UNIX.  For Tandem, it is the irrelevance of locality of
both data and processor.  For both it is the ability to establish rational
constraints between the functional requirements of users and coherent
DBMS.
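
To make the n:m point concrete: on a single machine you can get the
whole effect from one System V message queue -- n clients enqueue
requests, m servers pull them off as they come free.  A sketch of the
idea only, not Tuxedo code (the queue key, message format, and names
are invented):

    /* mxdemo.c -- n clients : m servers over one SysV message queue.
     * Build: cc mxdemo.c -o mxdemo
     * Run "mxdemo server" a few times, then "mxdemo" to send requests.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    #define REQ_KEY 0x54555801L     /* arbitrary key for the shared queue */

    struct request {
        long mtype;                 /* 1 = work request */
        char text[64];              /* payload; real monitors use typed buffers */
    };

    int main(int argc, char **argv)
    {
        int qid = msgget(REQ_KEY, IPC_CREAT | 0666);
        struct request r;

        if (qid < 0) { perror("msgget"); return 1; }
        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            /* Any number of copies of this loop may run; the kernel hands
             * each message to exactly one idle server -- the n:m multiplex. */
            for (;;) {
                if (msgrcv(qid, &r, sizeof r.text, 1, 0) < 0) break;
                printf("server %ld served: %s\n", (long)getpid(), r.text);
            }
        } else {
            r.mtype = 1;
            sprintf(r.text, "request from client %ld", (long)getpid());
            msgsnd(qid, &r, sizeof r.text, 0);
        }
        return 0;
    }
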
>
>--
>Mike Tossy			ShareBase Corporation (part of Teradata)
>miket@blia.bli.com		(used to be Britton Lee)


-- 
Dave Fuller				   
Sequent Computer Systems		  Think of this as the hyper-signature.
(708) 318-0050 (humans)			  It means all things to all people.
dafuller@sequent.com

dafuller@sequent.UUCP (David Fuller) (01/18/91)

did not mean to replicate a paragraph; my fingers did the stumbling.
-- 
Dave Fuller				   
Sequent Computer Systems		  Think of this as the hyper-signature.
(708) 318-0050 (humans)			  It means all things to all people.
dafuller@sequent.com

Jim_Troester@TRANSARC.COM (01/22/91)

> In article <2060007@hpcuhc.cup.hp.com>, dhepner@hpcuhc.cup.hp.com
> (Dan Hepner) writes:
>> The first relevant fact is that _Unix_ transaction monitors in
>> general, and Tuxedo/T in particular, have only begun to penetrate
>> the Unix OLTP market.  Thus, the base of end user commercial customers
>> today is somewhat limited.

In response miket@blia.sharebase.com (Mike Tossy) says:
> Since they have only begun to penetrate the market, perhaps someone will be
> good enough to explain what they are and when they are useful?

> Mike Tossy                  ShareBase Corporation (part of Teradata)
> miket@blia.bli.com           (used to be Britton Lee)

An informative overview of TP monitors recently appeared in the
November 1990 issue (Vol. 33, No. 11) of _Communications of the ACM_.
The 12-page article starts on page 75 and is titled "Transaction
Processing Monitors," by Philip A. Bernstein.

+------------------------------------------------------------------+
| James Troester     troester@transarc.com         (412) 338-4469  |
| Transarc Corp.,     707 Grant Street,     Pittsburgh, PA  15219  |
|            All statements are mine and not Transarc's            |
+==================================================================+

dhepner@hpcuhc.cup.hp.com (Dan Hepner) (01/22/91)

From: dafuller@sequent.UUCP (David Fuller)

>>Since they have only begun to penetrate the market, perhaps someone will be
>>good enough to explain what they are and when they are useful? 

>The wizened IBM or Tandem hand would claim that the assertion of "recent
>market penetration" is equivalent to "UNIX market immaturity".
>Dave Fuller				   

Great posting, Dave Fuller.

There are good reasons why TP monitors have not penetrated the
Unix market to the extent that they have penetrated the mainframe
market. It's interesting to note that several/most of the problems which
drove the original development of CICS (the most significant IBM monitor)
simply don't exist on Unix.   CICS came into being in an environment
where timesharing wouldn't be available for years to come, where
the computing paradigm was a batch job which took over the entire
computer and ran to completion, and then the next one began.  In
order to allow for OLTP, some mechanism was needed to have
a "single, perpetual batch job which interacted with lots of terminals".

In that long gone world, you got one process.  Period.  And while 
modernization has made this no longer the case, what has remained is 
an accepted axiom that processes are expensive.

Unix came along with the concept, if not always the reality, of very 
cheap processes.  Utilizing that concept, it is very easy to address the
vast majority of the problems which have been addressed by mainframe
TP monitors: getting input from many terminals, keeping track of the
context of each user, and so on.  Cheap processes were the wonder drug
of operating systems.

Unfortunately, the concept meets reality at some number of processes.
Go over that line, and the cost of maintaining all of those usually
idle processes will consume an unacceptable percentage of your OLTP 
machine, particularly when compared to doing the same work using a
few processes.  Thus, we find that the current Unix OLTP paradigm is 
quite good so long as one only intends to address applications with a 
"small" number of users.

Dan Hepner

brian@edat.UUCP (brian douglass personal account) (01/22/91)

In article <DENAP.91Jan16125351@alta.sw.stratus.com> denap@alta.sw.stratus.com (Tom DeNapoli) writes:
>
>I haven't had as much response to this as I had hoped.  
>

Well, I'll chime in here a bit.

I've been studying ITI and Tuxedo for about the last 6 months for a 
new product my company is developing.  I've been having regular 
conversations with the folks at ITI and ATT USL about Tuxedo to 
determine what it is and what it isn't.

Tuxedo is a suite of products designed to implement ISO's 
Distributed Transaction Processing Model 10026.x.  There are
currently 3 main components: the Front End Application; the
Transaction Monitor; and the Resource Manager (typically a DBMS,
but not limited to that).  The RM talks to the TM through the XA
interface as defined by X/Open.  If a product is XA compliant, it
should be able to talk to the TM.  I am not sure if the FE must
also be XA compliant to talk to the TM.  Tuxedo has a /T product
that implements the TM portion.  They also have /D, which implements
the RM portion.  I understand Oracle has an XA interface currently
working, and Informix is rumored to have one in the wings.  This
makes sense since Informix is ATT's product of choice.  /T and /D
are sold separately.  /T also comes with an FE generator to create
screens and your application.
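
For a flavor of the API: a /T client is ordinary C making ATMI calls
(tpinit, tpalloc, tpcall, tpterm).  The fragment below is my own
sketch from the docs, untested -- the DEPOSIT service and the buffer
contents are invented:

    /* ATMI client sketch: join the application, call one service, leave. */
    #include <stdio.h>
    #include <string.h>
    #include <atmi.h>           /* Tuxedo ATMI declarations */

    int main(void)
    {
        char *buf;
        long len;

        if (tpinit(NULL) == -1) {           /* attach to the application */
            fprintf(stderr, "tpinit failed, tperrno=%d\n", tperrno);
            return 1;
        }
        buf = tpalloc("STRING", NULL, 512); /* typed buffer for the request */
        strcpy(buf, "ACCT=12345 AMT=100.00");
        /* Synchronous request/response; /T routes it to an idle server. */
        if (tpcall("DEPOSIT", buf, 0, &buf, &len, 0) == -1)
            fprintf(stderr, "DEPOSIT failed, tperrno=%d\n", tperrno);
        else
            printf("reply: %s\n", buf);
        tpfree(buf);
        tpterm();                           /* detach */
        return 0;
    }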

A point of contact for Tuxedo at ATT is Glenn Rose at 201-522-6477.
I believe Glenn is Product Manager for Tuxedo (and he'll probably
kill me for giving out his phone number, so please be judicious in your
calling).  Tuxedo is a large and sophisticated product, but looks
ideal for OLTP work.  I received a stack of docs about 6 inches
high for the whole system.  So far so good.  

Independence Technologies' ITran system is a superset of Tuxedo.
They add in an X-window FE development system that generates C++
code.  Also, they built some kind of Transaction Definition
Language that allows you to easily build transactions on-the-fly.
As it was described to me, first you build your database.  Then you
build a transaction and describe the components with a mouse-driven
screen.  Once the transaction is defined, the system has already
created the FE portion.  An example would be to build a DEPOSIT
transaction.  You define the necessary fields for the trans, and
then their manipulations.  Compile, and presto! you have the
appropriate entry screen, plus the trans in Tuxedo that could
receive the same trans from an ATM.

As I have heard it, ITran was originally developed as a joint
venture with ITI, Pyramid, and Oracle, these latter two footing the
bill.  There was some type of falling out, and ITI has begun to
port to other products.  It now runs on the Tandem S2 fault
tolerant computer.  Also why Informix should be available soon.
Regardless of the corporate problems, their engineering staff is
supposed to be top notch and to have done a superior job developing
the add-ons to Tuxedo.  They are definitely worth a close look.

ITI can be reached at 415-438-2027.

I have no relationships with ATT or ITI.  I'm just a developer
trying to find the right tools to develop a killer OLTP system.
My sources of information are my own and have no affiliation with
persons or companies mentioned in this article.

Indications so far are that Tuxedo can result in a 30-40% increase
in throughput over conventional DBMS implementations.  An Oracle
test going through Tuxedo's enhanced client/server model saw a 30%
increase on the same hardware.

The place to see all this is at UniForum this week in Dallas, and
that's where I'm headed.  All the reading of literature and ad 
material is fine, but now I want to do some hands on.  If desired,
I will post a report of what I find to the net, or just e-mail to
those who wish to hear if the response is less.

Tuxedo runs currently on a variety of platforms, and can easily be
ported to any SVR3 (naturally) or above platform.  There is a
completed port to SVR4 on the 386 for ATT Unix.  IMHO Tuxedo and/or
ITran for small 386/486 systems could have enormous impact for Unix
on the Desktop and OLTP.  I can get 20TPS out of a 386 Compaq for
about $20,000.  $1,000/TPS is an enormous reduction in
price/performance versus typical OLTP systems running $7,000+/TPS.
Since Tuxedo also supports 2-phase commit, low-cost
High/Availability systems are now within reach.  Add an extra 30%
in performance to a 386 box and OLTP now becomes realistic in many
hitherto impossible applications.  For anyone that is interested,
for $120,000 you can buy a source license and make a port yourself.

Hope this has been a help.

Brian Douglass			Voice: 702-361-1510 X311
Electronic Data Technologies	FAX #: 702-361-2545
1085 Palms Airport Drive	brian@edat.uucp
Las Vegas, NV 89119-3715
-- 
Brian Douglass			brian@edat.uucp

dafuller@sequent.UUCP (David Fuller) (01/22/91)

I think Brian is mostly right on, but I'd like to respectfully pick a few nits.   
In article <2374@edat.UUCP> brian@edat.UUCP (brian douglass personal account) writes:
>Tuxedo runs currently on a variety of platforms, and can easily be
>ported to any SVR3 (naturally) or above platform.  There is a
>completed port to SVR4 on the 386 for ATT Unix.  IMHO Tuxedo and/or
>ITran for small 386/486 systems could have enormous impact for Unix
>on the Desktop and OLTP.  I can get 20TPS out of a 386 Compaq for
>about $20,000.  $1,000/TPS is an enormous reduction in
>price/performance versus typical OLTP systems running $7,000+/TPS.

I would be a little careful about this number; the figure you quote 
does not include onsite service, history file maintenance, or other
significant costs (not to mention system admin costs).  

ps: I can push a Sequent S3 to somewhere near 20 TPS without Tuxedo 
    today and probably get a S2000/40 to go beyond this.  These are
    essentially PC-based systems.  I agree that Tuxedo can help 
    most any machine deal with more connections and move folks towards
    service distribution versus data distribution.

>Since Tuxedo also supports 2-phase commit, low-cost
>High/Availability systems are now within reach.  

2-phase commit does not necessarily equate to "high availability"
to me; 2-phase commit has more to do with logical and geographic 
distribution of data than with additional reliability.  (2-phase commit is
conventionally the ability of a distributed system to permit unilateral 
abort by any of the concerned parties in the first phase and abort by only
the committer in the second phase.  See papers by Gray, et al. for 
details.)
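
A toy coordinator makes the two phases plain.  Stub participants and
no logging, so this is only the happy path -- a real implementation
must force-write a log record at each step to survive crashes:

    /* Toy two-phase commit coordinator (no logging, no failure handling). */
    #include <stdio.h>

    enum vote { VOTE_COMMIT, VOTE_ABORT };

    /* Stand-ins for messages to remote resource managers. */
    static enum vote prepare(int rm)   { printf("RM%d prepared\n", rm); return VOTE_COMMIT; }
    static void      commit(int rm)    { printf("RM%d committed\n", rm); }
    static void      abort_txn(int rm) { printf("RM%d aborted\n", rm); }

    int two_phase_commit(int nrm)
    {
        int i, decision = 1;

        for (i = 0; i < nrm; i++)            /* phase 1: collect votes */
            if (prepare(i) == VOTE_ABORT) {  /* any party may veto (unilateral abort) */
                decision = 0;
                break;
            }
        for (i = 0; i < nrm; i++)            /* phase 2: broadcast the outcome */
            if (decision) commit(i); else abort_txn(i);
        return decision;
    }

    int main(void) { return two_phase_commit(3) ? 0 : 1; }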

>Brian Douglass			Voice: 702-361-1510 X311
>Electronic Data Technologies	FAX #: 702-361-2545
>1085 Palms Airport Drive	brian@edat.uucp
>Las Vegas, NV 89119-3715
>-- 
>Brian Douglass			brian@edat.uucp


-- 
Dave Fuller				   
Sequent Computer Systems		  Think of this as the hyper-signature.
(708) 318-0050 (humans)			  It means all things to all people.
dafuller@sequent.com

clh@tfic.bc.ca (Chris Hermansen) (01/24/91)

In article <2060008@hpcuhc.cup.hp.com> dhepner@hpcuhc.cup.hp.com (Dan Hepner) writes:
>From: dafuller@sequent.UUCP (David Fuller)
>
>>>Since they have only begun to penetrate the market, perhaps someone will be
>>>good enough to explain what they are and when they are useful? 
>
>>The wizened IBM or Tandem hand would claim that the assertion of "recent
>>market penetration" is equivalent to "UNIX market immaturity".
>>Dave Fuller				   
>
>Great posting, Dave Fuller.

Amen.
>
>There's good reasons why TP monitors have not penetrated the
>Unix market to the extent that they have penetrated the mainframe
>market. It's interesting to note that several/most of the problems which
>drove the original development of CICS (the most significant IBM monitor)
>simply don't exist on Unix.   CICS came into being in an environment
>where timesharing wouldn't be available for years to come, where
>the computing paradigm was a batch job which took over the entire
>computer and ran to completion, and then the next one began.  In
>order to allow for OLTP, some mechanism was needed to have
>a "single, perpetual batch job which interacted with lots of terminals".

[stuff deleted]

Just a few more comments (hopefully germane :-)

One of the big uglies with CICS was (is?) (on VS1, at least) that, using
one large enclosing batch job, all user programs linked in had the potential
of stomping all over other user programs (they all used the same protect key).
Usually, programs so ill behaved merely resulted in an addressing exception,
but I always wondered about program segments going off the ends of arrays...
(WHAT! What do you mean, my bank balance is zero!?!?!).  It's my understanding
that MVS may have fixed this problem.  What VS1 shops used to do was have
two CICS partitions (out of a maximum of 15, I believe) - one for testing,
one for production.  When you had a new program to test, you got the system
manager to relink the test CICS and start up a fresh copy.  When you had it
"working", you put in a request to have it linked to the production CICS,
which usually would happen just after backups when the system was brought
back up.  Point being, there was no dynamic linking...

Software AG made (makes?) a TP monitor called COM-PLETE that gives a
different perspective to the problems above; I believe that most sites
that have given COM-PLETE a try are fairly happy with it (no plug intended).
It uses a "multithreaded" approach, maintaining a separate thread for
each application program.

And one other issue about TP monitors; remember that in the big IBM
environment, the three hundred fast-knuckled folks pounding in data are
using block-mode terminals, not full-duplex VT100s, the point being that
only a tiny amount of system resources are being used in dealing with
user interrupts, and each one of those interrupts is one transaction.
One doesn't often see UN*X systems configured like that :-)

Interesting thread.

Chris Hermansen                         Timberline Forest Inventory Consultants
Voice: 1 604 733 0731                   302 - 958 West 8th Avenue
FAX:   1 604 733 0634                   Vancouver B.C. CANADA
clh@tfic.bc.ca                          V5Z 1E5

C'est ma facon de parler.

rec@indetech.com (Rick Cobb) (01/24/91)

I apologize for taking so long to answer any of these questions, but
there's lots to say.  I'll be posting a few notes about TUXEDO
and iTRAN over the next couple of days; I hope not to get too 
salesy about the whole thing....

But first:

In article <2374@edat.UUCP> brian@edat.UUCP (brian douglass personal account) writes:
> ... a lot of really good stuff about Tuxedo and iTRAN...
>
> As I have heard it, ITran was originally developed as a joint
> venture with ITI, Pyramid, and Oracle, these later two footing the
> bill.  There was some type of falling out... 

This isn't true.  iTRAN was developed by ITI, funded by ongoing OLTP
application development we do for a large health insurance company.  
iTRAN is ours, ours, ours. Oracle and Pyramid were
vendors for the initial iTRAN effort. Now that ITI has demonstrated
what can be done with their products in Unix OLTP, we're quite
friendly.  We just announced a strategic partnership with Pyramid, in
fact.

>					   ... ITI has begun to
> port to other products. It now runs on the Tandem S2 fault
> tolerant computer.  Also why Informix should be available soon.
			   ^^^
No "why", yes it should be available soon.  It's nice to see 
the DBMS companies jumping quickly into the XA interface camp.

iTRAN is also available on the HP800 series, Sequent / PTX, 
and Sun (Sparc only).  Other ports are in progress.

We're demonstrating iTRAN-developed applications in the USL, Pyramid, 
Sequent, and HP booths at UniForum.

> .............................. their engineering staff is
> supposed to be top notch and to have done a superior job developing
> the add-ons to Tuxedo.  They are definitely worth a close look.

We think so-- and thank you.  

> ITI can be reached at 415-438-2027.

-- 
____*_  Rick Cobb		   	rcobb@indetech.com, 
\  / /  Independence Technologies  	{sun,pacbell,sharkey}!indetech!rcobb
 \/ /   42705 Lawrence Place	   	FAX:   (US) 415 438 2034
  \/    Fremont, CA 94538	   	Phone: (US) 415 438 2004

rec@indetech.com (Rick Cobb) (01/24/91)

In article <51164@sequent.UUCP> dafuller@sequent.UUCP (David Fuller) writes:
>I think Brian is mostly right on, but I'd like to respectfully pick a few nits.   
>In article <2374@edat.UUCP> brian@edat.UUCP (brian douglass personal account) writes:
>>Tuxedo runs currently on a variety of platforms, and can easily be
>>ported to any SVR3 (naturally) or above platform.  There is a
>>completed port to SVR4 on the 386 for ATT Unix.  IMHO Tuxedo and/or
>>ITran for small 386/486 systems could have enormous impact for Unix
>>on the Desktop and OLTP.  I can get 20TPS out of a 386 Compaq for
>>about $20,000.  $1,000/TPS is an enormous reduction in
>>price/performance versus typical OLTP systems running $7,000+/TPS.
>
>I would be a little careful about this number; the figure you quote 
>does not include onsite service, history file maintenance, or other
>significant costs (not to mention system admin costs).  
>
> ... 
>
>>Since Tuxedo also supports 2-phase commit, low-cost
>>High/Availability systems are now within reach.  
>
>2-phase commit does not necessarily equate to "high availability"
>to me; 2-phase commit has more to do with logical and geographic 
>distribution of data than with additional reliability.  (2-phase commit is
>conventionally the ability of a distributed system to permit unilateral 
>abort by any of the concerned parties in the first phase and abort by only
>the committer in the second phase.  See papers by Gray, et al. for 
>details.)

Thanks, Brian, Dave (and Dan Hepner) for summarizing a lot of the
issues.  Let me see if I can put some of this stuff into the 
perspective I push around at ITI (i.e., these aren't necessarily
ITI's opinions, they're mine -- but I'm fairly loud).

1. Dave's right on the 2-phase commit being mostly about data
   distribution, rather than high availability.  In many (certainly not
   all) cases, replicated data with a good application-level log can do
   the redundancy job without the cost.  The fundamental problem in
   high availability is takeover, not redundancy, at least at this
   stage of the game.  TUXEDO is helpful, but not perfect, in this
   regard.  Things will get better.

2. The importance of transaction monitors (TMs) in UNIX goes beyond the
   "two process" client/server model proffered by the DBMS companies.

   For example, Sybase and Ingres both have the ability to have a
   single process serve multiple workstation windows (or client
   terminal processes).  

   Why do I need a TM with these guys?  Because I don't necessarily
   want to field every bit of logic in my application into those client
   processes: I want them as light as possible, so I can handle more of
   them with less RAM, or support more active applications per
   workstation.

   Ingres and Sybase would tell me to write my application as "stored
   procedures" or as "Open Server(TM)s".  Why would I want to buy into
   a proprietary language for my application coding?  If I write my
   application as an "application server" within the ISO 10026 model, I
   can write it in normal languages (C, C++, COBOL,...) using normal
   DBMS interfaces (embedded SQL) and expect some portability across
   DBMS's (and TM's in the long run).  I get essentially the same
   performance and administrative characteristics as the stored
   procedures, because I run them the same way: statelessly and under
   centralized control.  I send them something, they work, they send me
   an answer and go on to work on the next "terminal."
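   (A sketch of such a service routine appears just after this list.)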

   This "three-process" (actually "n-process") model gets even more
   important when some of the things I want to do in a transaction are
   CPU-intensive instead of I/O intensive.  By splitting the "services"
   within my application onto multiple application server processes, I
   can field the CPU-intense services on high-MIPS boxes, without
   bogging down the I/O intensive processes on the high I/O boxes.

3. The connection model gets even worse when workstations hit the picture.
   A normal user on a workstation will hold 4-10 applications "in
   reserve" as icons.  In many two-process models, these clients will
   run over a megabyte of RAM a piece, both at the client and on the
   resource manager.  (i.e., one on the client box, and one on the
   server) [Some DBMS companies are better on the backend than this.]
   When you're fielding a 1000 + workstation network, the cost of
   RAM/workstation (plus the 5G of RAM for the server) becomes
   prohibitive or impossible to deploy.  By staying stateless and
   connectionless, these clients can be as small as their window
   systems let them be.

4. We try to let the workstation environment do the non-database work
   for us -- but the database intensive work with lots of conditional
   logic needs to run near the data.  For example, in insurance claims
   processing we've focused workstation MIPS on things like image
   presentation and 3270-emulation (for quick access to CICS stuff
   without coding UNIX OLTP transactions). We focused server MIPS on
   the database *and* on the nasty conditional logic (read "rule
   system") necessary to figure out if a claim should be paid.  If the
   rule system starts to require more MIPS than I/O, it may be moved
   onto a computational server.

5. It's important to have facilities to talk to weird devices like
   point-of-sale terminals out on X.25 networks.  You don't
   necessarily need to multi-thread clients, like in CICS or Pathway,
   since X.25 takes care of most of the multiplexing for you (for the
   applications we've done so far, SVC's [switched virtual circuits]
   have been the way to go).  Besides, some of the best X.25 software
   runs on cheap hardware anyway, and you've got a transaction
   monitor to make the LAN transparent.  Treating everything like it
   can run curses or X is a mistake: the OLTP world is based on getting
   terminals close to the customer, and the right terminal for a
   customer may be the weirdest damned thing you ever saw.  It's nice
   if you can use the same configuration of software to run against a
   POS terminal *and* an X-window.
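
Here, as promised under point 2, is roughly what one of those
"application servers" looks like: plain C with embedded SQL,
dispatched by the monitor.  A sketch only -- the service name, table,
and request format are invented, and error handling is abbreviated:

    /* Stateless Tuxedo-style application service in C with embedded SQL. */
    #include <stdio.h>
    #include <string.h>
    #include <atmi.h>

    EXEC SQL INCLUDE sqlca;          /* embedded-SQL status area */

    void DEPOSIT(TPSVCINFO *rqst)    /* the monitor dispatches requests here */
    {
        EXEC SQL BEGIN DECLARE SECTION;
        long   acct;
        double amt;
        EXEC SQL END DECLARE SECTION;

        /* Unpack the request from the typed buffer (the format is ours). */
        sscanf(rqst->data, "ACCT=%ld AMT=%lf", &acct, &amt);

        EXEC SQL UPDATE account SET balance = balance + :amt
                 WHERE acct_no = :acct;

        if (sqlca.sqlcode != 0)      /* signal failure; only the transaction's */
            tpreturn(TPFAIL, 0, rqst->data, 0, 0);   /* originator aborts it   */

        strcpy(rqst->data, "OK");
        tpreturn(TPSUCCESS, 0, rqst->data, 0, 0);
    }

The service holds no state between requests, which is exactly what
lets a few copies of it stand in for thousands of "terminals."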

There's lots more to say; I'll try to be shorter with my next article.

>>Brian Douglass                        brian@edat.uucp 
>Dave Fuller
>dafuller@sequent.com

Hi Dave! Long time no SIGBAR!
-- 
____*_  Rick Cobb		   	rcobb@indetech.com, 
\  / /  Independence Technologies  	{sun,pacbell,sharkey}!indetech!rcobb
 \/ /   42705 Lawrence Place	   	FAX:   (US) 415 438 2034
  \/    Fremont, CA 94538	   	Phone: (US) 415 438 2004

dhepner@hpcuhc.cup.hp.com (Dan Hepner) (01/24/91)

From: brian@edat.UUCP (brian douglass personal account)

>The RM talks to the TM through the XA
>interface as defined by X/Open.  If a product is XA complient, it
>should be able to talk to the TM.  I am not sure if the FE must
>also be XA compliant to talk to the TM.

Well, this is the _promise_ of the XA interface standard,
but there's plenty of reason to believe that it will never
be that simple.

First off, there is today only a preliminary XA spec, and
it is still subject to changes of arbitrary complexity.  Hopefully,
the basic mechanisms will survive into the final standard,
but almost certainly there will be significant changes.

Secondly, the XA standard will probably allow for different
ways to do things, such that an arbitrary RM will be unlikely
to painlessly interface with an arbitrary TM. 

X/Open uses Application Program (AP) instead of FE.
Eventually, after the XA spec is competed, X/Open will
complete an AP - TM interface.  There is no such spec today.  
So in that sense, the AP will similarly be expected to be 
compliant. 

>I understand Oracle has an XA interface currently
>working, and Informix is rumored to have one in the wings.  This
>makes sense since Informix is ATT's product of choice.

One interpretation of the recent announcements (and lack thereof)
from RM vendors is that they all want to keep as wide a market
as they can for their products.  This only makes sense.  As such,
they can be expected to interface to any TM which shows promise
to extend their market penetration.

>All the reading of literature and ad 
>material is fine, but now I want to do some hands on.  If desired,
>I will post a report of what I find to the net, or just e-mail to
>those who wish to hear if the response is less.

I for one look forward to reading your posted experiences.

>IMHO Tuxedo and/or
>ITran for small 386/486 systems could have enormous impact for Unix
>on the Desktop and OLTP.  I can get 20TPS out of a 386 Compaq for
>about $20,000.  $1,000/TPS is an enormous reduction in
>price/performance versus typical OLTP systems running $7,000+/TPS.

Careful.  There is no good transformation between "benchmarking TPS"
and "TPS on some real application".  Further, there are no good
transformations between the various kinds of "TPS".   What kind of 
TPS are you reporting (TPC-A? TPC-B? TP-1?), and how was it measured? 
At best, these numbers are only comparable with similar results on
other machines.  At worst, their information content is limited
to a measurement of the cleverness and/or sleaziness of the benchmarkers 
who generated the numbers.

>Since Tuxedo also supports 2-phase commit, low-cost
>High/Availability systems are now within reach.
>Brian Douglass			Voice: 702-361-1510 X311

Careful. Very few "off the shelf" systems have good recovery
mechanisms for such configurations, and it's easy to underestimate
the difficulty.  Indeed, 2PC protocols can keep your database
consistent, but they won't help get things started back up again
when something goes down.

Dan Hepner

dafuller@sequent.UUCP (David Fuller) (01/24/91)

>2-phase commit does not necessarily equate to "high availability"
>to me; 2-phase commit has more to do with logical and geographic 
>distribution of data than with additional reliability.  (2-phase commit is
>conventionally the ability of a distributed system to permit unilateral 
>abort by any of the concerned parties in the first phase and abort by only
>the committer in the second phase.  See papers by Gray, et al. for 
>details.)

	I guess I should edit myself here.  The availability of 2-phase
	commit fosters 3-phase commit, wherein multiple committers (with
	one dedicated master) handle more than one location for valid data.

	In essence, if the master cannot respond to the commit, its 
	subordinates can arbitrate mastery and commit.  This means that 
	the DBMS will be OK if the master's gone down: the remaining
	interested parties will become more interested and organize 
	themselves to ensure the DBMS is available.  But this requires 
	advanced diagnosis of why the original commit failed.  

	As we know, asking "why" tends to prove dicey for computers.
	The easiest failures to detect tend to be those least likely
	to occur...

>-- 
>Dave Fuller				   
>Sequent Computer Systems		  Think of this as the hyper-signature.
>(708) 318-0050 (humans)			  It means all things to all people.
>dafuller@sequent.com


-- 
Dave Fuller				   
Sequent Computer Systems		  Think of this as the hyper-signature.
(708) 318-0050 (humans)			  It means all things to all people.
dafuller@sequent.com

Jim_Troester@TRANSARC.COM (01/25/91)

I've really enjoyed this thread; it seems to me that comp.databases is
a reasonable home for oltp discussions.

I just wanted to briefly share my view on the evolution of TP Monitors
and the direction that they will take on Unix in the 90's.

---------------------------------

_History_

TP Monitors grew out of a number of application projects in the late
1960's and early 1970's.  A number of independent groups observed
that there were common elements in all on-line systems.  They culled
out the APIs to those elements and the TP monitor was born.

Much of the infrastructure that is commonplace in the nineties simply
did not exist circa 1970.  Layered communications architectures,
standard data base APIs, and presentation services were at best
research curiosities.  Most of these early TP Monitors were products
of hardware vendors simply because of economy of scale: all the
related infrastructure had to be supplied with the monitor or taken
from a proprietary OS and its tools.

There was a strong predisposition to ``large'' centralized systems
because communications were expensive and slow while the software
infrastructure was nonexistent.

The 1980s saw tremendous changes in the data processing industry that
have set the stage for a revolution in distributed computing.

Hardware advances, particularly in chip technology, have reduced
expense and increased capacity by a factor of 20 in a decade.
Additionally, communications costs have plummeted.

The software infrastructure is now (mostly) in place.  SNA, OSI, and
TCP/IP all ease data communications.  Relational databases accessed
through SQL are mature.  UNIX is quickly becoming the predominant
operating environment for medium sized systems.  Even proprietary
operating systems typically provide POSIX interfaces, C, and the C
libraries.

The stage is now set for a new type of TP Monitor that integrates the
changes of the eighties to provide a cost-effective distributed
environment.

_UNIX_Distributed_TP_Monitors_

The modern TP monitor must co-exist to be successful.  All general
purpose software built on top of "open systems" must be open.  This
means that interfaces must be published and care must be taken to
allow the systems integrator to "plug in" components.

The computing milieu of the 1990's is predisposed to
heterogeneous distributed systems.  Microprocessor technology, high
speed communications, and the appropriate software infrastructure have
shifted the economy of scale from large centralized systems to
clusters of microprocessor based servers surrounded by large numbers
of workstations or PCs.

The challenge for the TP Monitor is to integrate de jure
standards, de facto standards, and industry leading products.
For the development environment, the TP Monitor provider must select a
set of tools that complement each other and allow for the rapid
building of applications.  The Monitor must ensure that the execution
environment contains the mix of features necessary for efficient and
secure DTP.  Further, it must provide a framework on which a seamless
administration environment can be constructed.

+------------------------------------------------------------------+
| James Troester     troester@transarc.com         (412) 338-4469  |
| Transarc Corp.,     707 Grant Street,     Pittsburgh, PA  15219  |
|            All statements are mine and not Transarc's            |
+==================================================================+

brian@edat.UUCP (brian douglass personal account) (01/29/91)

In article <51386@sequent.UUCP> dafuller@sequent.UUCP (David Fuller) writes:
>>2-phase commit does not necessarily equate to "high availability"
>>to me; 2-phase commit has more to do with logical and geographic 
>>distribution of data than with additional reliability.  (2-phase commit is
>>conventionally the ability of a distributed system to permit unilateral 
>>abort by any of the concerned parties in the first phase and abort by only
>>the committer in the second phase.  See papers by Gray, et al. for 
>>details.)
>
>	I guess I should edit myself here.  The availability of 2-phase
>	commit fosters 3-phase commit, wherein multiple committers (with
>	one dedicated master) handle more than one location for valid data.
>
>	In essence, if the master cannot respond to the commit, its 
>	subordinates can arbitrate mastery and commit.  This means that 
>	the DBMS will be OK if the master's gone down: the remaining
>	interested parties will become more interested and organize 
>	themselves to ensure the DBMS is available.  But this requires 
>	advanced diagnosis of why the original commit failed.  
>	
>	As we know, asking "why" tends to prove dicey for computers.
>	The easiest failures to detect tend to be those least likely
>	to occur...

This is, to a degree, what I was talking about in saying that two-phase
commit helps to provide high availability.  At UniForum I discussed
this point at length with USL: having replicated database
functionality through the two-phase commit protocol to give fail
over to a sub-controller that then becomes the master.  USL has great
interest in doing something like this in Tuxedo, but it is NOT
currently possible.  Their response was "We don't see why we
can't do this in the future".

We built this kind of functionality into a DOS based system built
around Faircom libraries.  Transactions came across the network
either from workstations or from gateways that act like terminal
servers receiving serial input.  The transactions hit the main
controller, which would either process them itself, pass them on to
the sub-controller, or do both.  Both machines do an "I'm alive.  You
alive?"  "Yes I'm alive.  You alive?" type of interrogation.  If
the master fails, the sub takes over all processing, while
maintaining a sync file of transactions.  When the master comes
back alive, it processes everything in the sync file, catching up
to the sub and resuming its duties as the master.  
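
For what it's worth, that "You alive?" exchange is just a heartbeat
with a takeover rule.  Stripped to a skeleton in C (the port, the
timeout, and everything else here are invented; the real work is
replaying the sync file afterward):

    /* Heartbeat sketch: sub-controller takes over if the master goes quiet. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define HB_PORT    7001
    #define HB_TIMEOUT 5                /* seconds of silence before takeover */

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in me;
        struct timeval tv;
        fd_set fds;
        char buf[16];

        memset(&me, 0, sizeof me);
        me.sin_family      = AF_INET;
        me.sin_addr.s_addr = htonl(INADDR_ANY);
        me.sin_port        = htons(HB_PORT);
        bind(s, (struct sockaddr *)&me, sizeof me);

        for (;;) {
            FD_ZERO(&fds);
            FD_SET(s, &fds);
            tv.tv_sec  = HB_TIMEOUT;
            tv.tv_usec = 0;
            if (select(s + 1, &fds, NULL, NULL, &tv) == 0) {
                /* Master missed its deadline: assume mastery, then replay
                 * the sync file before accepting new transactions. */
                printf("no heartbeat in %d sec -- taking over\n", HB_TIMEOUT);
                break;
            }
            recv(s, buf, sizeof buf, 0);    /* "I'm alive" from the master */
        }
        return 0;
    }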

Keeping a sync file on both machines is no real big deal for
Tuxedo, nor is the replication of data.  The fail over protocol, as
Dave mentioned, is the sticking point.  But I think we can expect to
see something like this in the near future.

Rick Cobb, I believe, also mentioned that it is important to make your
terminal applications as independent from your backend processing
as possible, because sometimes you hook in weird stuff.  This
is absolutely true.  In our application our terminals are Slot
Machines!  People sign up at the casino and receive a "frequent
player" card.  The more money they put into a slot machine, the
more points they receive.  All of this is transmitted back to a host
and processed in real time.  So, who knows where your data is coming
from sometimes.

Does anybody know about TOP END from NCR, or Transarc's?  Maybe it's
time to widen this discussion to Transaction Monitors and not just
Tuxedo?

Oh, BTW, I will post some of what I found about Tuxedo and other
OLTP related products at UniForum.  But first I have to tell
management.

Brian Douglass			Voice: 702-361-1510 X311
Electronic Data Technologies	FAX #: 702-361-2545
1085 Palms Airport Drive	brian@edat.uucp
Las Vegas, NV 89119-3715
-- 
Brian Douglass			brian@edat.uucp

brian@edat.UUCP (brian douglass personal account) (01/29/91)

(Dan Hepner) writes:
|
||I understand Oracle has an XA interface currently
||working, and Informix is rumored to have one in the wings.  This
||makes sense since Informix is ATT's product of choice.
|
|One interpretation of the recent announcements (and lack thereof)
|from RM vendors is that they all want to keep as wide a market
|as they can for their products.  This only makes sense.  As such,
|they can be expected to interface to any TM which shows promise
|to extend their market penetration.

Informix & Sybase announced at UniForum their endorsement of Transarc's
OSF TM.  So I guess this statement is right on the money.

||All the reading of literature and ad 
||material is fine, but now I want to do some hands on.  If desired,
||I will post a report of what I find to the net, or just e-mail to
||those who wish to hear if the response is less.
|
|I for one look forward to reading your posted experiences.
|
||IMHO Tuxedo and/or
||ITran for small 386/486 systems could have enormous impact for Unix
||on the Desktop and OLTP.  I can get 20TPS out of a 386 Compaq for
||about $20,000.  $1,000/TPS is an enormous reduction in
||price/performance versus typical OLTP systems running $7,000+/TPS.
|
|Careful.  There is no good transformation between "benchmarking TPS"
|and "TPS on some real application".  Further, there are no good
|transformations between the various kinds of "TPS".   What kind of 
|TPS are you reporting (TPC-A? TPC-B? TP-1?), and how was it measured? 
|At best, these numbers are only comparable with similar results on
|other machines.  At worst, their information content is limited
|to a measurement of the cleverness and/or sleaziness of the benchmarkers 
|who generated the numbers.

The application I am building mimics TP1 almost exactly.  I have
customers, I have accounts, I have deposit and withdrawal stations.
I do two or three updates, an insert, a commit and I'm done.  People
with real world applications that must also do deletions, nested loops,
ad-hoc queries, and all those other things that RDBMSs do so well
absolutely must be careful looking at numbers.  All the ad claims
in the world are just so many paper airplanes in pre-production
until you define your application requirements, sit down with a
possible product and try it out in YOUR world.  And not in some
company lab where everything is sanitized.  Numbers are good for
comparative purposes, but always try it out on your turf.
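
For anyone who hasn't seen it, that "two or three updates, an insert,
a commit" is the classic debit-credit shape, and it's about this much
embedded SQL (the table and column names follow the usual benchmark
folklore, not any particular product):

    /* TP1 / debit-credit transaction body, embedded SQL in C. */
    EXEC SQL INCLUDE sqlca;

    EXEC SQL BEGIN DECLARE SECTION;
    long   acct, teller, branch;
    double delta;
    EXEC SQL END DECLARE SECTION;

    int do_tp1(void)
    {
        EXEC SQL UPDATE account SET balance = balance + :delta
                 WHERE  acct_no   = :acct;
        EXEC SQL UPDATE teller  SET balance = balance + :delta
                 WHERE  teller_no = :teller;
        EXEC SQL UPDATE branch  SET balance = balance + :delta
                 WHERE  branch_no = :branch;
        EXEC SQL INSERT INTO history (acct_no, teller_no, branch_no, amount)
                 VALUES (:acct, :teller, :branch, :delta);
        EXEC SQL COMMIT WORK;
        return sqlca.sqlcode;           /* 0 means the commit stuck */
    }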

As a side note, a fella at Wyse said they did some prelim
benchmarks on their new 9000 series using Informix OnLine doing
TP1.  Unaudited numbers are 40 TPS per processor, with a maximum of
8 processors, at a cost of about $1500/TPS.  Official numbers should
be coming out over the next few months.

||Since Tuxedo also supports 2-phase commit, low-cost
||High/Availability systems are now within reach.
||Brian Douglass			Voice: 702-361-1510 X311
|
|Careful. Very few "off the shelf" systems have good recovery
|mechanisms for such configurations, and it's easy to underestimate
|the difficulty.  Indeed, 2PC protocols can keep your database
|consistent, but they won't help get things started back up again
|when something goes down.
|
|Dan Hepner

Yep, so I've seen, and been told.  But this does seem to be
something the TMs are interested in implementing.  See an earlier
comment.


Brian Douglass			Voice: 702-361-1510 X311
Electronic Data Technologies	FAX #: 702-361-2545
1085 Palms Airport Drive	brian@edat.uucp
Las Vegas, NV 89119-3715
-- 
Brian Douglass			brian@edat.uucp

denap@alta.sw.stratus.com (Tom DeNapoli) (01/29/91)

>>>>> On 24 Jan 91 16:16:43 GMT, Jim_Troester@TRANSARC.COM said:

Since I originated this thread I thought I'd jump back in...

troester> I just wanted to briefly share my view on the evolution of TP Monitors
troester> and the direction that they will take on Unix in the 90's.

troester> ---------------------------------

troester> _History_

troester> TP Monitors grew out of a number of application projects in the late
troester> 1960's and early 1970's.  A number of independent groups observed
troester> that there were common elements in all on-line systems.  They culled
troester> out the APIs to those elements and the TP monitor was born.

troester> Much of the infrastructure that is commonplace in the nineties simply
troester> did not exist circa 1970.  Layered communications architectures,
troester> standard data base APIs, and presentation services were at best
troester> research curiosities.  Most of these early TP Monitors were products
troester> of hardware vendors simply because of economy of scale: all the
troester> related infrastructure had to be supplied with the monitor or taken
troester> from a proprietary OS and its tools.

troester> There was a strong predisposition to ``large'' centralized systems
troester> because communications were expensive and slow while the software
troester> infrastructure was nonexistent.

I agree wholeheartedly.  The definition of what a TP Monitor is,
in today's terms, is of great interest to me.  The 'classic' monitor
was a single package providing many of the tools that have since
become, or are becoming, commodities in AP programming.  I give
you: proprietary forms systems vs X, proprietary IPC vs RPC,
proprietary transaction semantics vs XA, proprietary AP
environment configuration vs ?DME?.  They were TM, RM, CM and
mini OS rolled into a single, behemoth, package. 

Today's monitors must be able to co-exist with these systems, due
in part to large installed bases like CICS.  But their function,
IMHO, will move towards that of system integrators: leveraging
emerging standards like DCE, X/Open DTP, etc., and adding some
value.  Not trying to BE the TM, RM, etc., but living with whatever
flavor the AP decides to use.  

The definition of TP Monitor will no doubt change.  But, what
customers come looking for are TP Monitors, so that's what
they're still called.  What they'll become is AP environment
configurators/facilitators. 

-Tom 

--
  Tom DeNapoli              | Stratus Computer, Inc.
  denap@alta.sw.stratus.com | 55 Fairbanks Blvd M23EN3
  uunet!lectroid!alta!denap | Marlboro, MA 01752

richieb@bony1.bony.com (Richard Bielak) (01/30/91)

In article <1991Jan23.205907.5477@tfic.bc.ca> clh@tacitus.UUCP (Chris Hermansen) writes:

>
>One of the big uglies with CICS was (is?) (on VS1, at least) that, using
>one large enclosing batch job, all user programs linked in had the potential
>of stomping all over other user programs (they all used the same protect key).
>Usually, programs so ill behaved merely resulted in an addressing exception,
>but I always wondered about program segments going off the ends of arrays...
>(WHAT! What do you mean, my bank balance is zero!?!?!).  It's my understanding
>that MVS may have fixed this problem.  What VS1 shops used to do was have
>two CICS partitions (out of a maximum of 15, I believe) - one for testing,
>one for production.  When you had a new program to test, you got the system
>manager to relink the test CICS and start up a fresh copy.  When you had it
>"working", you put in a request to have it linked to the production CICS,
>which usually would happen just after backups when the system was brought
>back up.  Point being, there was no dynamic linking...

[...stuff deleted...]

From reading a few of the TUXEDO manuals (I only have manuals, no
software), I know that TUXEDO uses shared memory for client/server
communications (at least for processes on one machine). 

Can a program using TUXEDO corrupt the shared memory with a runaway
pointer error and mess up the entire system? Or is the shared memory
protected?

In the distant past I worked on a PDP-11/RSX-11M system, where the TP
monitor used shared memory. If some program had a bug and corrupted
the shared memory, the whole system went down the tubes.

Just wondering.


...richie
-- 
+----------------------------------------------------------------------------+
| Richie Bielak  (212)-815-3072    | "The sights one sees at times makes one |
| Internet:      richieb@bony.com  |  wonder if God ever really meant for    |
| Bang:       uunet!bony1!richieb  |  man to fly."  -- Charles Lindbergh     |

rick@tetrauk.UUCP (Rick Jones) (01/30/91)

In article <DENAP.91Jan29102725@alta.sw.stratus.com> denap@alta.sw.stratus.com (Tom DeNapoli) writes:

>The definition of what a TP Monitor is,
>in todays terms, is of great interest to me.
>[ ... ]
>But their function,
>IMHO, will move towards one of system integrators.  Leveraging on
>emerging standards like DCE, X/Open DTP, etc. and adding some
>value.  Not trying to BE the TM, RM etc but living with whatever
>flavor the AP decides to use.  
>
>The definition of TP Monitor will no doubt change.  But, what
>customers come looking for are TP Monitors, so that's what
>they're still called.  What they'll become is AP environment
>configurators/facilitators. 

This is indeed true.  We are using Tuxedo/T to extend our range of business
software, because our customers are increasingly demanding better transaction
security.  We have always been in the mini-computer and Unix sector, so trying
to emulate a mainframe TP monitor is not an issue.  Furthermore, in a
conventional software architecture of "one executable per application",
transaction security can be obtained simply by using a database which supports
transaction semantics.

A client/server architecture allows the separation of the user front-end from
the database operations, which is important both as a design issue, and as a
way to increase the flexibility of the whole system.  But there are also many
ways of running a client/server architecture without any form of TP monitor.

The key contribution of Tuxedo is to support a client/server architecture where
transaction management is handled at a higher level than any of the executing
processes.  Thus a client process can start a transaction, and then call
services in any number of servers, locally or remote, and the work done forms
part of that global transaction.  Only the process which started the
transaction may commit or abort it, but any participating process may signal
failure, preventing the originator from subsequently making an erroneous commit.
The XA interface is a vital component, allowing the distributed transaction
management to operate on multiple heterogeneous databases.
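
In ATMI terms the pattern is compact.  A sketch (the service names are
invented, and error handling is abbreviated):

    /* One global transaction spanning two services, possibly on
     * different machines and different XA-compliant databases. */
    #include <atmi.h>

    int transfer(char *from_req, char *to_req, long len)
    {
        long olen;

        if (tpbegin(30, 0) == -1)       /* start the global transaction */
            return -1;
        if (tpcall("WITHDRAW", from_req, len, &from_req, &olen, 0) == -1 ||
            tpcall("DEPOSIT",  to_req,   len, &to_req,   &olen, 0) == -1) {
            tpabort(0);                 /* any failure aborts all the work */
            return -1;
        }
        return tpcommit(0);             /* only the originator may commit */
    }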

It also allows a sort of "virtual multi-threading" in servers, in that when a
server has completed a segment of work for one transaction, it is free to do
work for any other transaction even though the first transaction has not yet
been completed.

The advantages for development are greater modularity, in that servers can be
treated as a form of dynamic library.  A service simply performs a requested
operation, and either succeeds or fails.  The semantics of the surrounding
transaction are not its business.

The advantages for operation are extensive decentralisation and distribution of
operations and resources, and achievement of load balancing by the simple
expedient of running multiple copies of heavily used servers.

We are not using the Tuxedo tools for building front-ends or servers, since we
are using our own object-oriented techniques (using Eiffel) to build these
components.  We also happen to be using the Tuxedo/D database, since it is a
lot easier to integrate than ESQL - that will come later!  But we can in
principle change the database backend to anything XA compliant without touching
the transaction management aspects.

Thus to us Tuxedo/T is very much an integration tool, as Tom DeNapoli
suggested.  In fact it is debatable whether the advantages of encapsulation
provided by object oriented methods are compatible with transactions without
the help of either an OODB or a TP monitor to manage transactions at a global
level.

Anyone know of an XA compliant OODB?

-- 
Rick Jones
Tetra Ltd.  Maidenhead, 	Was it something important?  Maybe not
Berks, UK			What was it you wanted?  Tell me again I forgot
rick@tetrauk.uucp					-- Bob Dylan