[comp.sw.components] Ted Dunning's flamage

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (05/08/89)

From article <TED.89May6221947@kythera.nmsu.edu>, by ted@nmsu.edu (Ted Dunning):
> [...] it will take every technique
> known to the software industry (not just componentry), to
> make ada programs reliable.  They are incorrect when they assert that
> ada is a good (or even reasonable) tool for writing and maintaining
> software in general, and large systems in particular.
> 
> The particular problem is only a single aspect of the basic problem
> with the ada movement which is that the impetus is being provided by
> people who are technically incompetent in the field.

   I'll leave these asinine comments to others; the areas of uncertainty
   regarding the Unisys STARS research will be addressed in this article.
 
> 	  o The Reuseability Library Framework is a set of building
> 	    blocks and tools for building, maintaining, and accessing
> 	    knowledge-based libraries of Ada software components.
> 
> what is a knowledge-based library other than a buzz-word?  the madison
> avenue implication here is that the library actually understands what
> it contains and will help you find what you need.  _right_

   Not some Madison Avenue hype, but the actual project objective:

      Clear and sound classification schemes are becoming essential 
      for effective access to the increasing body of publicly available
      Ada components.  Unisys believes that the most dramatic productivity
      gains in reuse will be realized through the development of libraries
      of components for specific domains, and structured according to
      explicit _domain_models_.  Such models can be realized as detailed
      taxonomies of the objects and operations which are pertinent to the
      application domain to be served by the repository.  Different domains
      require different semantic attributes for cataloging and retrieving
      components, and the taxonomy used in each domain will evolve with
      usage over time.  Thus, it is important to realize that there is 
      no single "right" classification scheme for software reuse; what is
      needed are model-building tools which can be used in creating 
      libraries for different domains.

      The objective of the Reuseability Library Framework project is to
      develop this essential knowledge-based foundations technology for
      building _intelligent_librarians_, and to demonstrate the use of
      this technology by building an example library for the domain of
      Ada benchmark tests.  Through the use of knowledge-based techniques,
      relevant knowledge about the domain of the software to be reused
      can be embedded within the library itself; library tools can access
      this embedded knowledge to provide active, "intelligent" guidance
      to users in focused search and retrieval of library components, or
      qualification of candidate components.  As a secondary goal, we are
      developing the library framework as a set of reuseable, separately
      selectable components that will support the integration of 
      knowledge-based techniques in Ada tools and applications outside
      the library domain. 

> 	  o The base set of tools supports the creation and maintenance
> 	    of "intelligent software librarians": repositories of reuseable
> 	    Ada components organized around particular application
> 	    domains.
> 
> of course it doesn't support this at all, since no such "intelligent
> software librarians" (quote marks are apt here aren't they) exist to
> be maintained, and no one currently knows how to create them.

   On the contrary:

      The initial librarian design will be a prototype that supports
      interactive search through a particular library taxonomy.  Component 
      retrieval using the librarian is analogous to successive query
      refinement within database models.  As the search through the
      library proceeds, AdaTAU rules [note: AdaTAU is an expert system]
      provide focusing guidance at choice points.
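   To make the mechanism concrete, here is a toy sketch (hypothetical,
   in C++, and in no way the Unisys implementation): successive query
   refinement over a faceted taxonomy amounts to filtering a candidate
   set one answered attribute at a time.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// A component is catalogued under attribute=value facets drawn
// from a domain taxonomy, e.g. structure=queue, bound=static.
using Facets = std::map<std::string, std::string>;

struct Component {
    std::string name;
    Facets facets;
};

// One refinement step: keep only the components whose catalogued
// value for `attr` matches the user's answer.  A real librarian
// would also rank the remaining choice points; this one just filters.
std::vector<Component> refine(const std::vector<Component>& pool,
                              const std::string& attr,
                              const std::string& value) {
    std::vector<Component> kept;
    for (const auto& c : pool) {
        auto it = c.facets.find(attr);
        if (it != c.facets.end() && it->second == value)
            kept.push_back(c);
    }
    return kept;
}
```

   The interesting engineering, which the quoted passage is about, is in
   choosing and ordering the questions at each choice point, not in the
   filtering itself.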

> 	  o Library tools access this embedded knowledge to provide
> 	    active "intelligent" guidance to users in focused search
> 	    and retrieval of library components.
> 
> funny how suddenly the ada community has achieved what ai has failed
> to do for years.  or are we really just talking about doing key-word
> or full text search of abstracts describing each package.

   No, as described above, an expert system is guiding the search.

> 	  o Testing and qualification of candidate components for 
> 	    inclusion into the reuse library are performed prior to
> 	    cataloging the component in the library.
> 
> is this policy or mechanism?  is the testing done to the standards
> used in magellan (latest catastrophic bug found less than two weeks
> ago), or the hubble space telescope?

   It's a mechanism:

      [The test plan assistant] is motivated by two factors, the need
      to support black box testing of Ada modules, and the parts 
      engineering and qualification process for Ada parts to be inserted
      into reuseability libraries.  [It] requires a network-based  
      structural model of an Ada subprogram interface, and using actions
      provided by AdaTAU rule bases, queries the user for implicit
      assumptions about the unit under test (such as parameter
      interdependencies).  [It] produces a structured set of test
      specifications where the set has been pruned by built-in heuristics
      and user-specified choices.

      In addition to a direct representation of the Ada type hierarchy
      as a semantic network via AdaKNET, [it] will utilize testing
      heuristics associated with various data types, as well as general
      rules drawn from the experiences of veteran Ada programmers.  The
      knowledge base built to support [the test plan assistant] should
      be reuseable by other "smart" tools, and [the test plan assistant]
      will itself be absorbed in the Librarian systems where additional
      semantic information may be available from the library domain model.
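   As a rough illustration only (a single hard-coded boundary-value
   heuristic standing in for the AdaTAU rule bases; all names are
   hypothetical), a generator of this flavor walks a structural model
   of the subprogram interface and emits a pruned set of test points:

```cpp
#include <cassert>
#include <initializer_list>
#include <string>
#include <vector>

// Structural model of one formal parameter: a name plus a
// constrained range (a stand-in for an Ada constrained subtype).
struct Param {
    std::string name;
    long lo, hi;
};

struct TestPoint {
    std::string param;
    long value;
};

// Boundary-value heuristic: for each parameter keep only the range
// extremes and their neighbors, pruning the interior of the range.
std::vector<TestPoint> plan(const std::vector<Param>& iface) {
    std::vector<TestPoint> specs;
    for (const auto& p : iface)
        for (long v : {p.lo, p.lo + 1, p.hi - 1, p.hi})
            specs.push_back({p.name, v});
    return specs;
}
```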

 
> [...]
> 			     \           /
> 			 Hybrid Knowledge Representation Systems
> 			     /           \
> 			    /             \
> 			AdaKNET          AdaTAU
> 			Semantic         Rule-Based
> 			Network          System
> 
> the implication that the qualifying librarian and the test plan
> assistant are automated.  this is bullshit.

   No, this is fact.

> the bottom three items are intrinsically vapor even if there is code
> that is claimed to do this.

   Both semantic nets and rule-based systems have been around for years,
   and thousands (if not millions) of them exist.  Your comments are 
   indicative of the great extent to which you are unaware of the field. 

> 	   o ACE has been implemented on Sun Workstations (R) running
> 	     the Unix (R) operating system.  ACE has also been ported
> 	     to the Unisys PC/IT running the MS-DOS (R) operating system.
> 
> i hear a past tense here.  this implies that much of the foregoing
> hype that i found so objectionably overblown is just that and that ace
> is basically turbo-ada.  why can't they just say this?

   Essentially because that's not true.  I have used ACE; it's much
   more like an Ada-oriented operating system environment, running
   on top of something much less appropriate to the developer's needs 
   (e.g., Unix).

   (Quotations are from public-domain Unisys documents.  The Reuseability 
   Library Framework is a project in the STARS Databases/Reuseability 
   Foundations Area, and is being funded by Contract #N00014-88-C-2052, 
   U.S. Office of Naval Research...)


   Bill Wolfe, wtwolfe@hubcap.clemson.edu

ted@nmsu.edu (Ted Dunning) (05/09/89)

In article <5421@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) writes:


	...
      Not some Madison Avenue hype, but the actual project objective:

	...

	 The objective of the Reuseability Library Framework project is to
	 develop this essential knowledge-based foundations technology for
	 building _intelligent_librarians_, and to demonstrate the use of
	 this technology by building an example library for the domain of
	...

      On the contrary:

	 The initial librarian design will be a prototype that supports
	 interactive search through a particular library taxonomy.  Component 
	...

note the sudden shift from the past to the future tense.

	...

	[ with regard to the automatic test plan generator ]

      It's a mechanism:

	 [The test plan assistant] is motivated by two factors, the need
	 to support black box testing of Ada modules, and the parts 
	 engineering and qualification process for Ada parts to be inserted
	 into reuseability libraries.  [It] requires a network-based  
	 structural model of an Ada subprogram interface, and using actions
	 provided by AdaTAU rule bases, queries the user for implicit
	 assumptions about the unit under test (such as parameter
	 interdependencies).  [It] produces a structured set of test
	 specifications where the set has been pruned by built-in heuristics
	 and user-specified choices.

	 In addition to a direct representation of the Ada type hierarchy
	 as a semantic network via AdaKNET, [it] will utilize testing
	 heuristics associated with various data types, as well as general
	 rules drawn from the experiences of veteran Ada programmers.  The
	 knowledge base built to support [the test plan assistant] should
	 be reuseable by other "smart" tools, and [the test plan assistant]
	 will itself be absorbed in the Librarian systems where additional
	 semantic information may be available from the library domain model.

this is fascinating; a test plan generator that doesn't need to know
what the unit under test is SUPPOSED TO DO.  this fits in well with 
the experience with the software qualification of magellan and space
telescope. 

	...
   > 
   > the implication that the qualifying librarian and the test plan
   > assistant are automated.  this is bullshit.

      No, this is fact.

see above.  on the other hand, if the qualifying librarian and the
test plan generator can independently and automatically qualify and
generate tests for arbitrary ada software, then they are automated.
if they can't, then they are streamlined manual
procedures.

	...
      Both semantic nets and rule-based systems have been around for years,
      and thousands (if not millions) of them exist.  Your comments are 
      indicative of the great extent to which you are unaware of the field. 

(is this what is meant by ad hominem attacks?)

both semantic nets and rule-based systems have been around for years,
and they still have the same severe limitations which prevent them
from doing more than very limited, well specified tasks.  the task of
generating complete test suites is something that expert people still
cannot do very well, much less something that expert systems can do.
likewise with deriving useable taxonomies in novel domains.

current ai approaches work well in limited domains, or in toy
applications (credit apps, translation of weather reports).  they work
less well (but still useably) where the situation is basically
somewhat open-ended, but enormous effort can be applied over a period
of years (technical translation using systran is the canonical
example).  in situations requiring open-world reasoning of any sort,
they still basically don't function.

ACE may well be a significant advance in the state of the art in
software development, but overselling ai techniques, and
overdescribing systems that may in some degree utilize them is very
counter-productive.  (for one thing, it brings flamers out of the
woodwork).

	...
      Essentially because that's not true.  I have used ACE; it's much
      more like an Ada-oriented operating system environment, running
      on top of something much less appropriate to the developer's needs 
      (e.g., Unix).

ahhhh..... now this sounds more like what the system really is.  with
this approach, we can talk less in terms of buzz-words, and more in
terms of how dynamic the development cycle can be made and whether
test-stubbing and pre-testing of components can be supported.  let's
hear from all the satisfied users of ACE.

cmr@analogy.UUCP (Chesley Reyburn) (05/09/89)

In article <TED.89May8134732@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>In article <5421@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) writes:

Dear netlanders, how about CONSTRUCTIVE criticism?

So far we have one (1) idea about reusable software. That idea
depends on Ada. I don't use Ada and probably will not use Ada
in the near future. How about ideas for C? And I don't mean
"switch to C++". Is there a generic solution? Can we describe
a methodology that does not begin with a particular language?

=============================================================
Chesley Reyburn                 ...tektronix!ogccse!cvedc!cmr
ECAE Software, Prime Computer, Inc.   ...sun!cvbnet!cvedc!cmr
14952 NW Greenbrier Parkway              ...sequent!cvedc!cmr
Beaverton, OR 97006                       Phone  503/645-2410
=============================================================

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (05/11/89)

From article <TED.89May8134732@kythera.nmsu.edu>, by ted@nmsu.edu (Ted Dunning):
> 	 The initial librarian design will be a prototype that supports
> 	 interactive search through a particular library taxonomy.  Component 
> 
> note the sudden shift from the past to the future tense.

    Due to the fact that I was quoting early documents.

% 	 In addition to a direct representation of the Ada type hierarchy
% 	 as a semantic network via AdaKNET, [it] will utilize testing
% 	 heuristics associated with various data types, as well as general
% 	 rules drawn from the experiences of veteran Ada programmers.  The
% 	 knowledge base built to support [the test plan assistant] should
% 	 be reuseable by other "smart" tools, and [the test plan assistant]
% 	 will itself be absorbed in the Librarian systems where additional
% 	 semantic information may be available from the library domain model.
% 
% this is fascinating; a test plan generator that doesn't need to know
% what the unit under test is SUPPOSED TO DO.  this fits in well with 
% the experience with the software qualification of magellan and space
% telescope. 

    Read carefully: the test plan generator is exploiting semantic
    information from the library domain model.  It DOES know what
    the unit under test is supposed to do.

> the task of generating complete test suites is something that 
> expert people still cannot do very well, much less something that 
> expert systems can do.  likewise with deriving useable taxonomies 
> in novel domains.

    Gee, that sounds like a good area for research, then, huh?
    Precisely the government's motivation for funding the STARS program.


    Bill Wolfe, wtwolfe@hubcap.clemson.edu

jima@hplsla.HP.COM (Jim Adcock) (05/23/89)

> ..in the near future. How about ideas for C? 
> And I don't mean "switch to C++". 

Why not switch to C++ ???

One could still write reuseable packages in C++,
and give them a C compatible calling interface
using the extern "C" construct of C++.

???
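For the record, this is a standard pattern; a minimal sketch (all
names hypothetical) of a component implemented in C++ but exported
through a C-compatible calling interface:

```cpp
#include <vector>

// The handle type is opaque to C clients; they only ever see a
// pointer to it, so the C++ representation stays hidden.
struct Stack {
    std::vector<int> v;
};

// extern "C" suppresses C++ name mangling, so these functions can
// be declared in a plain C header and called from C code.
extern "C" {
    Stack* stack_create()              { return new Stack; }
    void   stack_destroy(Stack* s)     { delete s; }
    void   stack_push(Stack* s, int x) { s->v.push_back(x); }
    int    stack_pop(Stack* s) {       // precondition: stack not empty
        int x = s->v.back();
        s->v.pop_back();
        return x;
    }
}
```

A C translation unit would see only `struct Stack;` plus the four
function declarations, and link against the C++ object file.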

himanshu@alpha.CES.CWRU.Edu (Himanshu Rawell) (05/29/89)

In article <8630001@hplsla.HP.COM> jima@hplsla.HP.COM (Jim Adcock) writes:
>> ..in the near future. How about ideas for C? 
>> And I don't mean "switch to C++". 
>


>Why not switch to C++ ???

********* WHY NOT C? ************

Why put the cart before the horse? I have implemented my entire
Master's thesis in C (35,000 lines of C code), mostly by
restructuring and modifying the existing C code. Difficult, but a
sensible and productive thing to do (Reverse Engineering). Reusability
in action (C++ => C is reinventing the wheel).

By the way, my thesis is "A Catalog of Six Families of Reusable Software
Components". There are six components in each family. Some examples are
-   Buffer mechanism in EMACS-like text editors
-   Screen Update mechanism in EMACS-like editors.


>
>One could still write reuseable packages in C++,

Hey, why write them? Just modify the existing ones, to make them
have encapsulated interfaces. We are talking reusability. Again
one can write encapsulated components (object-oriented lingo!)
in C, by using disciplined programming. This is to avoid messing up
all the existing software and still have the reusable components around.
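The discipline in question, an opaque handle plus access functions,
needs nothing beyond C. The sketch below (hypothetical names) uses
only C constructs and compiles as either C or C++; discipline, not
language support, keeps the representation private:

```cpp
#include <stdlib.h>

/* Clients see only this forward declaration in the header;
   the full definition normally lives in the .c file alone. */
struct Buffer;

struct Buffer {
    int*   data;
    size_t len, cap;
};

struct Buffer* buf_create(size_t cap) {
    struct Buffer* b = (struct Buffer*)malloc(sizeof *b);
    b->data = (int*)malloc(cap * sizeof(int));
    b->len = 0;
    b->cap = cap;
    return b;
}

/* Returns 1 on success, 0 when the buffer is full. */
int buf_append(struct Buffer* b, int x) {
    if (b->len == b->cap) return 0;
    b->data[b->len++] = x;
    return 1;
}

size_t buf_length(const struct Buffer* b) { return b->len; }

void buf_destroy(struct Buffer* b) {
    free(b->data);
    free(b);
}
```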

>and give them a C compatible calling interface
>using the extern "C" construct of C++.
>
>???