[alt.hypertext] HYP8803B.BIB Bibliography Notes

deh0654@sjfc.UUCP (Dennis Hamilton) (04/01/88)

%A Paulina Borsook
%T New pains, new gains: Distributed database solutions are on their way
%J Data Communications
%V 17
%N 3
%D March, 1988
%P 149-162
%O Feature article on software
%K Distributed DBMS Relational Codd Date Stonebraker SQL Starburst R* 
SQL/DS DB2 OS/2 Extended Edition
%X This article attempts to explain the issues and difficulties that
surround operating a distributed database system with both local
transparency and local autonomy.  This is basically a topical survey,
but there are some useful sidebars on who the players are, on
Chris Date's 12+1 rules for distributed databases, and on Stonebraker's
7 rules (as modified by Gartner Group).  IBM's approach is reviewed,
and there is a checklist of things to look for and decisions to
make in a distributed database arrangement.  These are also applicable
to distributed hypertext systems, of course, including
   1. How do you decide where a particular file should be located?
   2. How much redundancy is needed for reliability and/or performance?  
That is, how much data duplication should be allowed or arranged?
   3. Should the system be fully distributed or involve a central
server or coordinator?
   4. How frequently should updates be propagated across the
network?  Should propagation be periodic, or should it occur as
updates are actually introduced in any locality?  (A rough sketch
of this trade-off follows the entry.)
   5. In the face of failures and isolation of subnetworks, what is
the policy for continuing operation, acceptance of updates, and
eventual resynchronization of the network?
   [dh: 88-03-17]
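  As a rough illustration of the trade-off in question 4, here is a
small sketch in present-day Python (names and numbers invented): updates
can be pushed to every replica as they are introduced, or batched and
flushed periodically.

    import time

    class Replica:
        def __init__(self):
            self.data = {}

        def apply(self, key, value):
            self.data[key] = value

    class Propagator:
        """Propagates updates from one locality to a set of replicas."""
        def __init__(self, replicas, eager=True, period=60.0):
            self.replicas = replicas
            self.eager = eager            # push each update immediately?
            self.period = period          # seconds between periodic flushes
            self.pending = []             # updates not yet propagated
            self.last_flush = time.time()

        def update(self, key, value):
            self.pending.append((key, value))
            if self.eager or time.time() - self.last_flush >= self.period:
                self.flush()

        def flush(self):
            for key, value in self.pending:
                for replica in self.replicas:
                    replica.apply(key, value)
            self.pending = []
            self.last_flush = time.time()

  Eager propagation keeps replicas current at the cost of traffic on every
update; periodic flushing batches the traffic but lets replicas lag, which
feeds directly into question 5 (what to do when a partition heals).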

%A Lindsy Van Gelder
%T A Feast of Knowledge
%J Ms.
%V 16
%N 10
%D April, 1988
%P 92
%O Technology feature
%K HyperCard HyperTalk Goodman Handbook Atkinson review
%X This favorable review gives a terrific sense of the potential and
flavor of hypertext.  For example,
   "Imagine that you're a budding historian, and you're browsing
on your computer through a stack of information about the kings of
France.  As you're reading about the reign of Louis IX, you find
yourself wondering about what was happening to architecture in the
same period.  With a few mouse clicks, you call up a stack listing
all the cathedrals built in the Middle Ages.  In another click or
two you're reading about the carving of the northern rose window at
Chartres in the 1230s.  At that point you might want to look at a
picture of the window, or a map of 13th-century France.  Or you
might want a list of hotels or restaurants in modern Chartres for
your next vacation.  Just click."
   The importance of hypertext and of HyperCard as its harbinger is
identified through Danny Goodman's observation that HyperTalk is
"programming for poets."  For Van Gelder, "For the first time,
computer novices (like the historian ... above) can design their
*own* programs."
   Noting that HyperCard has only been out since the summer of 1987,
Van Gelder observes that the contribution of stackware is phenomenal,
and a lot of inventiveness is being shown.
   The fact that HyperCard is effectively being given away does not
go without notice (or the fact that, like Microsoft Windows, the
packages will sell a lot of hard disks).  And there's some information
I hadn't known about Bill Atkinson's role: "His deal with Apple
specifies that if the company ever stops bundling HyperCard with
new Macs, ownership reverts to him -- at which point he'll give
it away.  He's also promised that if and when there's a similar
program for IBM compatibles, he'll make public the information that
would allow the two computers to exchange stacks."
   [dh:88-03-17]

%A Stuart J. Johnston
%T Knowledge Garden Hypertext Learning Tool Made for Writers
%J InfoWorld
%V 10
%N 11
%D March 14, 1988
%P 21
%K Textpro Knowledgepro Knowledge Garden hypertext threaded text
%X Textpro is a $15 tool for reading and writing small hypertext
documents on PC and PS/2 compatibles.  It was developed using
Knowledgepro and is designed to let non-technical users experiment
with hypertext.  The product is intended to generate interest in
the $495 Knowledgepro product.
  Knowledge Garden Inc., 473-A Malden Bridge Road, Nassau NY 12123;
(518) 766-3000.
  [dh:88-03-17]

%A David K. Gifford
%A Roger M. Needham
%A Michael D. Schroeder
%T The Cedar File System
%J Comm. ACM
%V 31
%N 3
%D March, 1988
%P 288-298
%O Special Section on Operating Systems
%K Distributed Systems directory structures files systems
cache consistency file replication immutable files
%X "The Cedar File System (CFS) is a workstation file system
that provides access to both a workstation's local disk
and to remote file servers via a single hierarchical
name space.  CFS supports a group of cooperating
programmers by allowing them to both manage local
naming environments and to share consistent versions of
collections of software." -- Abstract
  CFS is part of the Cedar experimental programming
environment developed at the Xerox Palo Alto Research
Center.  It is interesting for hypertext because of what it
tries to do in terms of autonomous derivation and coordination
of updates.
  CFS does *not* provide access to shared, mutable files.
Instead, "it provides a storage and naming system for
immutable files with updates resulting in new versions
of immutable files. [p.288]"  So there is a flavor of
the source-code-maintenance coordination problem here, too.
  A real problem: "Most distributed file systems cache
information they have retrieved from remote servers in
order to improve performance.  A cache is a set of local
copies of remote data. ... It is possible for a cache to
become inconsistent when the remote data copies are not
discarded.  The problem of keeping local cache copies
up to date to changes in remote data is known as the
cache consistency problem [p.289]"
  By using immutable files, CFS eliminates the cache
consistency problem -- any derived copy always agrees
with the original, since the original never changes.
"As one might expect the complexity of dealing with
sharing must reappear in another part of the system [p.290]"
The problem reappears as one of version management and
problems of getting users of the data to recognize and
notice the (potentially disparate) new versions that
are being put back into the system.  The authors find that
this is relatively easy to handle with applications software
in the cases studied.
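  A minimal sketch (in Python, with invented names) of why immutability
makes caching trivially consistent: cache entries are keyed by
(name, version), and a given version's contents never change, so a
cached copy can never be stale.

    class ImmutableFileCache:
        def __init__(self, server):
            self.server = server        # anything with fetch(name, version)
            self.local = {}             # (name, version) -> bytes

        def read(self, name, version):
            key = (name, version)
            if key not in self.local:
                # First use: copy the version from the remote server.
                self.local[key] = self.server.fetch(name, version)
            # No validation needed -- a version's contents are immutable.
            return self.local[key]

  Updates never overwrite an existing (name, version) pair; they create a
new version, so the consistency question becomes "which version should I
be using?", which is exactly the version-management problem just described.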
  The CFS naming system is hierarchical, with automatic
generation of a version extension on every file name, as
in Tenex.
  It is possible to "attach" a remote file by giving it
a local name.  It looks like a copy process, but the
actual copy is only made as needed.  (This metaphor works
because the remote file won't change after attachment,
which means that the attachment process also freezes the
version being connected, presumably.)
  The system only keeps some number of (unopened) versions,
and the oldest are automatically deleted.  The system also
keeps complete copies, rather than variant information,
but this is an implementation detail, really.  For hypertext,
one might want to do things rather differently, since
localized modifications to a large corpus would seem to be
the order of the day.  Version numbers (and hence any names)
are never re-used by the system, so there is never any
confusion resulting from removal of an attached file and
its subsequent replacement by another of the same name.
Creation times are propagated as part of file information
to all copies, also.
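  A sketch (Python, hypothetical names) of the version numbering and the
"attach" idea: creating a file always takes the next version number, which
is never reused, and attaching gives a remote version a local name without
copying until the file is actually opened.

    class VersionedStore:
        def __init__(self):
            self.versions = {}   # name -> highest version number so far
            self.content = {}    # (name, version) -> bytes

        def create(self, name, data):
            v = self.versions.get(name, 0) + 1   # never reused, as in Tenex
            self.versions[name] = v
            self.content[(name, v)] = data
            return v

        def fetch(self, name, version):
            return self.content[(name, version)]

    class Attachment:
        """A local name bound to one immutable remote version."""
        def __init__(self, remote, name, version):
            self.remote, self.name, self.version = remote, name, version
            self.copy = None                     # copied only when needed

        def open(self):
            if self.copy is None:
                self.copy = self.remote.fetch(self.name, self.version)
            return self.copy

  Because the attached version is frozen, the deferred copy behaves exactly
as if it had been made at attachment time.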
  The application of interest has to do with DF files.  DF
files are definitions of collections of other files into "subsystems"
(it being possible that executable programs, sources, documentation,
etc., be carried in the list).  The point is to allow discrete
files, subdirectories, etc., as usual, but to also identify those
things which must have subsystem "consistency."  Although the
DF process is implemented using a file, it is more appropriate to
think of the DF as placing the whole collection under DF management.
  So, when one checks out (brings over) the DF of a project or
other organized, related system of files, one is guaranteed a
consistent view.  When changes are completed and the work is
returned or posted, the central DF is updated in such a way that
new requesters automatically see the latest consistent version, as
if atomically updated all around.  (DFs can link/attach other
DFs, and in those cases, there is a way to make the link soft
enough that the latest target version is always used, at least
up to the point of some sort of commitment, say a test run?  The
binding to specific versions of other DFs comes as part of the
system release process.  [p.293]  This is interesting enough to
require detailed exploration as a separate topic.)
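  A sketch (Python, invented names) of the DF idea as I read it: a DF is a
manifest pinning each member file to a specific version, so a checkout is a
consistent snapshot; a reference to another DF can be left "soft" (resolved
to the latest version at checkout time) until the release process pins it.
The "store" below is anything with the create/fetch operations of the
earlier versioned-store sketch.

    class DF:
        def __init__(self, members, soft_dfs=(), pinned_dfs=()):
            self.members = dict(members)        # name -> pinned version
            self.soft_dfs = list(soft_dfs)      # other DFs, latest at checkout
            self.pinned_dfs = dict(pinned_dfs)  # other DFs, fixed versions

    def checkout(store, df):
        """Return a consistent {name: content} snapshot for one DF."""
        snapshot = {}
        for name, version in df.members.items():
            snapshot[name] = store.fetch(name, version)
        return snapshot

    def post_update(store, df, changed):
        """Post changed files as new versions and return an updated DF,
        so later requesters see the new consistent set as if atomically."""
        new_members = dict(df.members)
        for name, data in changed.items():
            new_members[name] = store.create(name, data)
        return DF(new_members, df.soft_dfs, df.pinned_dfs)

  (The soft/pinned distinction is only indicated here; the release-time
binding of soft references is the part the paper covers around p.293.)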
  There is a problem about incorrect version numbers, scrambled
system files, etc., for any number of reasons.  The DF user can
take advantage of the time stamps in attachments to try to get
the right version even though things have been jumbled up for one
reason or another.
  The DF mechanism is separate from the Cedar directory system.  Its
possible integration is discussed [p.297].
  [dh:88-03-23]

%A David R. Cheriton
%T The V distributed system
%J Comm. ACM
%V 31
%N 3
%D March, 1988
%P 314-333
%O Special Section on Operating Systems
%K distributed systems protocol architecture transparency
inter-process communication
%X "The V distributed system was developed at Stanford
University as part of a research project to explore
issues in distributed systems.  Aspects of the design
suggest important directions for the design of future
operating systems and communicating systems." -- Abstract
  The system uses message-passing for inter-process
communication, achieving remote procedure call via a
request and reply protocol.
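  A toy sketch (Python, invented names) of remote procedure call layered
on message passing with a request/reply protocol: the caller sends a
request message and blocks until the matching reply arrives.

    import queue, threading

    def server(requests, replies, procedures):
        """Receive request messages, invoke the named procedure, reply."""
        while True:
            req_id, name, args = requests.get()
            result = procedures[name](*args)
            replies.put((req_id, result))

    def call(requests, replies, name, args, req_id=1):
        """Send a request and block for the reply -- RPC semantics."""
        requests.put((req_id, name, args))
        while True:
            rid, result = replies.get()
            if rid == req_id:
                return result

    requests, replies = queue.Queue(), queue.Queue()
    procedures = {"add": lambda a, b: a + b}
    threading.Thread(target=server,
                     args=(requests, replies, procedures),
                     daemon=True).start()
    print(call(requests, replies, "add", (2, 3)))   # prints 5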
  There is a process identification scheme, and a group
identification scheme: processes can belong to several
groups and groups can contain any number of processes.
The multicasting system uses group names, but can narrow
the distribution by specifying qualifying delivery
conditions.  The result is a way to broadcast a request to
multiple recipients fairly transparently.
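  A sketch (Python, hypothetical names) of group addressing with a
qualifying delivery condition: a message is addressed to a group name,
and an optional predicate narrows which members actually receive it.

    class GroupRegistry:
        def __init__(self):
            self.groups = {}              # group name -> set of processes

        def join(self, group, process):
            self.groups.setdefault(group, set()).add(process)

        def multicast(self, group, message, qualify=None):
            """Deliver to every member, or only those passing `qualify`."""
            delivered = []
            for process in self.groups.get(group, ()):
                if qualify is None or qualify(process):
                    process.deliver(message)
                    delivered.append(process)
            return delivered

  A process can belong to several groups and a group can have any number
of members; broadcasting a request is just multicasting to a group to
which all the candidate servers belong.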
  A lot of the report is devoted to performance issues
concerning low-level operations, and decisions about
location of functions in the distributed kernel or at
application levels.  Some lessons learned: logical host
identification is a bad idea if processes can migrate;
likewise for the idea of a local group.
  The naming system is three-level: character-string
names (based on Unix-style hierarchies and local-working
directory relativization), object identifiers, and entity
identifiers.  Each object manager handles the character-string
identifiers for its objects; there is no centralized
name service.  There is a name-handling group to which
all object managers belong, and they register with unique
name prefixes.  The prefixes are negotiated in any number
of ways, the only requirement being that they are unique.
(The bindings between prefixes and object managers are
cached by smart programs, and there is a lot of caching
that would seem to be helpful for this system.  A global
naming system is married into this too, for reliability.
[This isn't explained in the paper, though. p.325])
  (Note that cache entries can be stale, but the system
will eventually get things back in sync.)
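  A sketch (Python, invented names) of the prefix scheme: each object
manager registers a unique name prefix, a client resolves a
character-string name by longest matching prefix, and it caches the
binding; a stale cache entry just causes a failed call and a re-lookup.

    class NameService:
        """Stands in for the name-handling group of object managers."""
        def __init__(self):
            self.prefixes = {}                # e.g. "/print/" -> manager

        def register(self, prefix, manager):
            assert prefix not in self.prefixes   # prefixes must be unique
            self.prefixes[prefix] = manager

        def lookup(self, name):
            best = max((p for p in self.prefixes if name.startswith(p)),
                       key=len, default=None)
            return self.prefixes[best] if best else None

    class Client:
        def __init__(self, service):
            self.service = service
            self.cache = {}                   # name -> manager binding

        def resolve(self, name):
            if name in self.cache:
                return self.cache[name]       # may be stale: retry on failure
            manager = self.service.lookup(name)
            self.cache[name] = manager
            return manager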
  When an object has been bound from a directory name,
the object manager returns an identifier for more-efficient
future use.  This identifier includes the identification
of the object manager (or one of its "ports") and a pass-back
identification generated by the object manager and which it
will use to recall the connection.  This avoids the need for
global agreement at the fine level of object identification,
but we have to watch the object manager's identifier.  This
is an *entity identifier* and the object identifier is
arranged to not have a lifetime any longer than the entity
identifier (which is lost in a crash).  So the object
identifier is for a current (dynamically maintained) binding,
not a permanent object identifier, per se.  Because
entities can be distributed and replicated, and can crash,
etc., there is a hierarchy of binding here too, sort of along
the lines of caching.  The object manager as a whole may
have a group identifier, and instances have instance
identifiers.  There are levels of invalidation and then
retrying that rediscover the more-efficient, most-specific
usable identities anew.
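  A sketch (Python, invented names) of the lifetime relationship as I read
it: an object identifier handed back by a manager embeds that manager's
entity identifier, so when the manager crashes and restarts (getting a new
entity identifier) old object identifiers are recognizably invalid and the
client falls back to resolving the name again.

    import itertools
    _entity_ids = itertools.count(1)

    class ObjectManager:
        def __init__(self):
            self.entity_id = next(_entity_ids)   # new id on every (re)start
            self.bindings = {}                   # local id -> object name

        def bind(self, name):
            local_id = len(self.bindings) + 1
            self.bindings[local_id] = name
            return (self.entity_id, local_id)    # the "object identifier"

        def operate(self, object_id):
            entity_id, local_id = object_id
            if entity_id != self.entity_id:
                raise LookupError("stale object identifier")
            return self.bindings[local_id]

        def restart(self):                       # simulate a crash
            self.__init__()

    def use(manager, name, object_id=None):
        """Client side: try the cached object id, re-bind on failure."""
        if object_id is not None:
            try:
                return manager.operate(object_id), object_id
            except LookupError:
                pass                             # invalidation -> retry
        object_id = manager.bind(name)
        return manager.operate(object_id), object_id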
  This depiction doesn't make the divisions too clear.  On
pp.326-327, it is clear that entity identifiers are
fixed-length codes that identify processes, groups of
processes, and transport-level communication endpoints.  Entity
identifiers are host-address independent.  An entity
identifier moves with the entity.  The kernel maps (caches)
entity identifiers to host addresses and there is a group
for coordinating this.  Problems include avoiding duplicate
entity identifiers and avoiding re-use too quickly (e.g.,
before an old use can get flushed out after a crash).  Not
enough information is given about any structure imposed on
entity identifiers and their different uses.  There are some
references to other papers, though.
  [dh:88-03-23]

%A Jan L. Guynes
%T Impact of System Response Time on State Anxiety
%J Comm. ACM
%V 31
%N 3
%D March, 1988
%P 342-347
%O Management of Information Systems -- Research Contribution
%K Human Factors anxiety system response time
%X "Recent research has show that user satisfaction and
productivity are affected by system response time.  Our
purpose is to provide the results of empirical research on
system response time and its effect on state anxiety.  Test
subjects were classified as Type A or Type B personality in
order to determine if personality type had any effect on the
relationship between system response time and state anxiety.
The results show that both Type A and Type B personalities
exhibit a statistically significant increase in state
anxiety during poor or variable system response times." -- Abstract
  Cites recent study suggesting that good system response time
is *the* major factor contributing toward user satisfaction.  Other
researchers are reported to have shown that user productivity
increases during periods of good response time. [p.342]
  Key definition: State anxiety is equated with feelings of
tension, nervousness, and apprehension which fluctuate over
time, whereas trait anxiety is equated with fear, or threat to
one's self-esteem.  The study was concerned only with state
anxiety.
  [dh:88-03-23]

%A Dennis Linnell
%T SAA: IBM's Road Map to the Future
%J PC Tech Journal
%V 6
%N 4
%D April, 1988
%P 86-105
%K OS/2 Extended Edition SAA Common User Access CUA CPI CCS CA
fast path interactions accelerators nonprogrammable interfaces
graphic interfaces processing requests navigation requests
%X "Enormous in scope, IBM's System Application Architecture
promises to standardize application development across diverse
hardware and software environments.  Though not there yet,
SAA should take its first stand with the release of OS/2
Extended Edition 1.1 ..." -- Abstract
  This is a survey article which tries to show the scope of
SAA and how it will try to unify what applications and users
do, while accommodating the great variety of interface devices
(non-programmable, programmable, and graphic [highly
programmable?]) and connected applications, via a large
variety of host technologies from PC to large mainframe, over
a variety of communication methodologies.
  The goal of providing greater consistency of usage among
application products has an important impact on the portability
of skills, training, and *familiarity* from environment to
environment and application to application.  Like it or not, these
considerations apply to any diverse propagation of hypertext
methodology too.
  Perhaps the most significant part of this article is the
description of Common User Access (CUA), the conventions by
which applications are able to interact consistently over a
tremendous range of interface capabilities.  Graphical CUA
environments are essentially like those being used for Windows
2.0 and the OS/2 Presentation Manager.  There is a strong
effort to impose syntactic consistency within an environment
(methods and forms of presentation, ways of communicating
action requests, etc.).  Across environments, there is
a consistent mapping to forms that are available, allowing the
user to move to less-or-more capable interfaces and find
recognizable counterparts of the familiar, even though it is
not possible to secure complete syntactic correspondence.  The
functions do correspond, and it is meant to be obvious to
skilled users how to adjust their approach to the interface.
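  A very rough sketch (Python, entirely invented bindings) of the idea of
functional correspondence without syntactic identity: the same abstract
action is mapped to whatever form each class of interface can support.

    ACTION_BINDINGS = {
        "graphical":       {"copy": "Edit menu / Copy",  "help": "Help menu"},
        "programmable":    {"copy": "F2 (marked block)", "help": "F1"},
        "nonprogrammable": {"copy": "PF9",               "help": "PF1"},
    }

    def present(action, environment):
        """Return the environment's rendering of an abstract action."""
        return ACTION_BINDINGS[environment][action]

  The bindings above are made up; the point is only that the table of
functions is shared while the presentations differ per environment.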
  Another concern has been to ensure that experienced users
have adequate shortcut and interaction-accelerator techniques,
all part of the standard methodology, so that their demand for high
performance is not frustrated by the presence of step-by-step,
fully assisted interaction.
  Application interaction is regularized into two parts: requests
to process information and requests to navigate through the
application.  The uniform dialog interface
addresses both of these activities and also provides for transfer
of information between panels (basically, interactive windows) and
between applications.
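  A minimal sketch (Python, hypothetical handlers) of that two-way split:
every user request is classified as either a processing request or a
navigation request and routed accordingly, with a shared clipboard
standing in for transfer of information between panels.

    clipboard = {}

    def dispatch(request, process_handlers, navigate_handlers):
        kind, name, payload = request
        if kind == "process":
            return process_handlers[name](payload, clipboard)
        elif kind == "navigate":
            return navigate_handlers[name](payload)
        raise ValueError("unknown request kind: " + kind)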
  The "SAA CUA Panel Design and User Interaction" manual provides
definition of how these qualities are to work, and also provides
a demonstration diskette for illustrating CUA concepts.
  The other end of these arrangements is seen via the standard
programming interface (CPI: Common Programming Interface).  IBM is going
to provide portable CPI support for its ANSI-standard versions of
FORTRAN 77, COBOL 85, and C.  There will also be common use of
CSP (the Cross System Product application development system),
REXX (Restructured Extended Executor, a command language), and the
Structured Query Language (SQL).  There will be other services, at
a variety of levels, including ones for casual database inquiries,
working in graphics, etc.
  The article walks through a lot more, and provides a complete
list of currently-available publications on SAA.  The 9-character
document number can be used in ordering through local IBM offices:
  GC26-4341 SAA Overview
  SC26-4351 SAA CUA Panel Design and User Interaction
  SC26-4362 SAA Writing Applications: A Design Guide
  SC26-4355 SAA Application Generator Reference
  SC26-4353 SAA C Reference
  SC26-4354 SAA COBOL Reference
  SC26-4348 SAA Database Reference
  SC26-4356 SAA Dialog Reference
  SC16-4357 SAA FORTRAN Reference
  SC26-4359 SAA Presentation Reference
  SC26-4358 SAA Procedures Language Reference
  SC26-4349 SAA Query Reference
  S544-3417 Intelligent Printer Data Stream Reference
  GC24-1584 Introduction to Advanced Program to Program Communication
  SC30-3431 Introduction to IBM's Open Network Management
  GC23-0765 Office Information Architectures: Concepts
  GA27-3093 SDLC General Information
  GC30-3072 SNA Concepts and Products
  SC30-3269 SNA Formats and Protocols, Architecture Logic for LU 6.2
  SC30-3422 SNA Formats and Protocols, Architecture Logic for Type 2.1 Nodes
  SC30-3098 SNA Formats and Protocols, Distribution Services
  GC30-3429 SNA Management Services Overview
  GC30-3073 SNA Technical Overview
  (mangled) Token Ring Network Architecture Reference
  GC30-3084 Transaction Programmer's Reference for LU 6.2
  GA27-3761 X.25 Interface for Attaching SNA Nodes to Packet Networks
  GA23-0059 3270 Data Stream Programmer's Reference
    It can be seen from this list that the needs of IBM's mainframes
and network arrangements are being embraced.  But the impact of SAA
is going to be felt more widely, and maybe more quickly than it might
have first appeared.
  [dh:88-03-27.  There are a large number of reasons to pay attention to this
material, even if you have no interest in applications that operate on larger
IBM products than the PC and PS/2 line.  Not only is the giant on the move,
this will automatically cultivate a gigantic infrastructure of developers and
users who are more-or-less acquainted with and comfortable with this
approach, even if they are not explicitly aware of and attentive to SAA itself.
Also, there is always lots of good work in this kind of IBM effort, so here's an
opportunity to build on already-solid effort, adapt good ideas, recognize
settings and uses that might not have been accommodated, and generally avoid
re-inventing the wheel badly (since even if that is what IBM has done, 
it won't be perceived that way in the world at large).  The comprehensive 
nature of SAA makes it *necessary* to understand how to be consistent with it.
It also might help you stay out of the way of Apple, Inc.'s next look-and-feel
tantrum, wherever that might be directed.]

%A Mark Brownstein
%T Panasonic 200MB WORM Drive Uses Built-In SCSI
%J InfoWorld
%V 10
%N 13
%D March 28, 1988
%P 25
%O Hardware Announcements
%K Panasonic 200MB WORM SCSI LF-5000
%X Panasonic's $2,595 WORM drive handles 200MB 5-1/4" cartridges
and can be daisy-chained via SCSI with up to six other devices.  The
use of SCSI is reported to triple access speed, with 230ms random
access and 2.5Mbps transfer rate.
  The drive works on the Macintosh SCSI port, or on PC's with a
SCSI interface card.  An RS232 interface is in the works.
  Panasonic Industrial Co., 2 Panasonic Way, Secaucus, NJ 07094;
(201) 348-7000.
  [dh:88-03-28]

%A (staff)
%T CCITT Function Library announced by Texas Instruments for the TMS34010
%J Texas Instruments Pixel Perspectives
%N 17
%D March, 1988
%P 3
%O TI Promotional Publication
%K CCITT Group 3 Group 4 FAX Data Compression CD-ROM
%X "Texas Instruments is now offering the CCITT Function Library for
the TMS34010 [graphics processor].  The new software library enables
the 34010 to operate as an embedded controller for the compression
and decompression of monochrome images, supporting the Group 3 and
Group 4 standards established by the Consultative Committee for
International Telegraph and Telephone (CCITT).
   "Depending on the source data complexity, the CCITT standard
achieves compression ratios of 20:1 or better.  The standard has also
been adopted by optical disk manufacturers because volumes of images
and documents can be stored on CD-ROMs as compressed data. ...
   "The CCITT Function Library consists of two C-callable functions
written in 34010 assembly language and supplied as source code that
produces 38Kbytes of object code.  There is a $3,000 one-time
unlimited license for the software, available on PC-compatible
5-1/4" floppy or on VAX-compatible 1600bpi tape.  (This and many
other development tools are offered with 25% discount until May 31.)"
  [dh:88-03-30]
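  The real Group 3/4 coders Huffman-code run lengths (and Group 4 codes
vertical changes between scan lines), but a toy run-length sketch in
Python shows why bi-level document images compress so well: long runs of
identical pixels collapse to a few numbers.

    def encode_runs(scanline):
        """Turn a row of 0/1 pixels into (pixel value, run length) pairs."""
        runs, count = [], 1
        for prev, cur in zip(scanline, scanline[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        if scanline:
            runs.append((scanline[-1], count))
        return runs

    def decode_runs(runs):
        pixels = []
        for value, length in runs:
            pixels.extend([value] * length)
        return pixels

    line = [0] * 100 + [1] * 8 + [0] * 92           # mostly white scan line
    assert decode_runs(encode_runs(line)) == line   # three runs vs 200 pixels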

%A James H. Coombs
%A Allen H. Renear
%A Steven J. DeRose
%T Markup Systems and the Future of Scholarly Text Processing
%J Comm. ACM
%V 30
%N 11
%D November, 1987
%P 933-947
%K Office Automation Word Processing Text Processing Editing Software Management
Human Factors Document Interchange Generic Coding Structure-Oriented
Editing
%X "Markup practices can affect the move toward systems that
support scholars in the process of thinking and writing.
Whereas procedural and presentational markup systems retard
that movement, descriptive markup systems accelerate the
pace by simplifying mechanical tasks and allowing the
authors to focus their attention on the content." -- Abstract
   "In the last few years, scholarly text processing has
entered a reactionary stage.  Previously, developers were
working toward systems that would support scholars in
their roles as researchers and authors.  Building on the
ideas of Bush, people such as Nelson and van Dam prototyped
systems designed to parallel the associative thought
processes of researching scholars.  Similarly, Engelbart
sought to augment human intellect by providing concept-manipulation
aids.  Reid developed Scribe, freeing authors from formatting
concerns and providing them with integrated tools for
bibliography and citation management.  Although only a small
percentage of scholars was exposed to these ideas, the movement
was toward developing new strategies for research and composition.
   "Since the introduction of inexpensive and powerful personal
computers, we have seen a change in focus away from developing
such new strategies toward finding ways to do old things faster.
This transition manifests itself in part through a change in
models.  Previously, developers worked with the model of scholar
as researching and composing author.  Recently, however, the
dominant model construes the author as typist or, even worse,
as typesetter.  Instead of enabling scholars to perform tasks
that were not possible before, today's systems emulate typewriters.
[p.933]"
   "This shift in dominant models creates three major problems.
First, the incentive for significant research and development in
computing systems is disappearing, and a major portion of resources
has been diverted into the enhancement of a minor portion of the
document development process. ... Second, developers and authors
have lost sight of the fact that there are two products in the
electronic development of a document: the printout and the
`source' file.  Currently, everything is directed toward producing
the printout; the source is a mere by-product, wasting valuable
disk space, useful for little more than producing another printout
of the same document.  Proprietary formats and the lack of semantic
and pragmatic coding make these files useless for sharing with
colleagues or processing by intelligent systems.  Finally, scholars'
time and energy are diverted from researching and composing to
formatting for final presentation. ... Current systems tend to
focus authors' attention on appearance all of the time, not just
when the document is ready for submission. [p.934]"
  It is proposed that the use of descriptive markup will alleviate
these problems.  The remainder of the presentation runs along the
following lines (a small illustrative sketch follows the outline):
   MARKUP THEORY
      Types of Markup
         Punctuational (has to do with punctuation style being interpreted for presentation cues)
         Presentational (presentation mimics layout of source)
         Procedural (as in nroff)
         Descriptive (describes what an element is, not what the formatter should do)
         Referential (including definitions and material from elsewhere)
         Metamarkup (controlling and extending interpretations of markups)
      Markup Handling
         reading (by humans)
         formatting
         open-ended (e.g., information retrieval)
      Exposed, Disguised, Concealed and Displayed
         Exposed markup is as stored in the source
         Disguised markup is kept behind a special mark
         Concealed markup hides it all, showing presentation form only
         Displayed markup is shown separately, as in Interleaf's margin
         Development systems should provide maximum flexibility with all four modes of viewing
   MAINTAINABILITY
         Descriptive markup is least vulnerable to obsolescence and changed style rules
         Procedural markup is the next most maintainable, but not the best, because it is ambiguous
         Presentational markup is much more difficult for text processing reasons
   DOCUMENT PORTABILITY
         Descriptive markup provides an immediate solution to document incompatibility
         In the worst case, syntax differences may be resolved by trivial programs
      Advantages
      Alternatives to Portability
      Portability Not Dependent on a Standard
   MINIMIZATION OF COGNITIVE DEMAND
      Basic Theory
   CONTENT ORIENTATION
   COMPOSITION ASSISTANCE AND SPECIAL PROCESSING
      Alternative Views of a Document
      Outlining and Structure-Oriented Editing
   CONCLUSIONS
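  A small illustrative sketch (Python, invented element names): with
descriptive markup the source records what an element *is*, and one
style table maps elements to presentation, so a style change touches the
table rather than every document.

    STYLE = {                      # one place to change the "style rules"
        "title":     lambda text: text.upper(),
        "quotation": lambda text: '    "' + text + '"',
        "emphasis":  lambda text: "*" + text + "*",
    }

    def format_element(element, text):
        """Render a descriptively marked-up element for presentation."""
        return STYLE[element](text)

    document = [("title", "Markup Systems"),
                ("quotation", "programming for poets"),
                ("emphasis", "own")]
    for element, text in document:
        print(format_element(element, text))

  With procedural or presentational markup the equivalent of the STYLE
table is smeared through the source, which is the maintainability point
made in the article.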

%A S. Carmody
%A W. Gross
%A T. H. Nelson
%A D. Rice
%A A. van Dam
%T A hypertext editing system for the /360
%P 291-330
%B Pertinent Concepts in Computer Graphics
%E M. Faiman
%E J. Nievergelt
%I University of Illinois Press
%C Urbana, IL
%D 1969
%O Cited by Coombs, Renear, and DeRose

%A C. F. Goldfarb
%T A generalized approach to document markup.
%P 68-73
%B Proceedings of the ACM SIGPLAN-SIGOA Symposium on Text Manipulation
%I ACM
%C New York
%D 1981
%O Portland, Oregon, June 9-10, 1981
%O Also Annex A to ISO 8879-1986 (E).

%A B. K. Reid
%T A high-level approach to computer document formatting
%P 24-30
%B Proceedings of the 7th Annual ACM Symposium on Programming Languages
%I ACM
%C New York
%D 1980
%O Las Vegas, Nevada, June 1980
%O Cited by Coombs, Renear, and DeRose
%K SCRIBE

		/* end of HYP8803B.BIB */


--
	-- orcmid {uucp: ... !rochester!sjfc!deh0654
		   vanishing into a twisty little network of nodes all alike}