[comp.edu] conceptual basis of CS

jsnyder@june.cs.washington.edu (John Snyder) (05/03/88)

I recently set myself the task of making a list of 5-10 fundamental
concepts of computer science.  The sort of idea I was after had to
(1) be such that current CS theory/practice would be inconceivable
without it and (2) crop up (perhaps in various disguises) in all
or many areas of computer science.  Notice that I didn't say anything
about historical importance, elegance, usefulness or status outside CS,
or future potential.  Clause (2) pretty well guarantees that it's
not going to be tied to any particular technology.

I would really like to see your list, perhaps with a brief comment
as to why you chose each item.  I came up with my own list
of 6 concepts, but I'll refrain from posting it so as not to 
unconsciously influence your own thinking.


jsnyder@june.cs.washington.edu              John R. Snyder
{ihnp4,decvax,ucbvax}!uw-beaver!jsnyder     Dept. of Computer Science, FR-35
                                            University of Washington
206/543-7798                                Seattle, WA 98195

dickey@ssc-vax.UUCP (Frederick J Dickey) (05/04/88)

In article <4823@june.cs.washington.edu>, jsnyder@june.cs.washington.edu (John Snyder) writes:
> I recently set myself the task of making a list of 5-10 fundamental
> concepts of computer science.  The sort of idea I was after had to
> (1) be such that current CS theory/practice would be inconceivable
> without it and (2) it crops up (maybe in various disguises) in all
> or many areas of computer science.  
> 
> I would really like to see your list, perhaps with a brief comment

It strikes me that a very, very fundamental concept of CS is the notion
that computers perform manipulations on data. This seems to manifest
itself in various guises. For example, Wirth's 
              Algorithms + Data structures = Programs
(the title of his 1976 book).

Essentially the same view is taken in AI
          Expert system = Inference engine + Knowledge representation,
except that different terminology is used for basically the same thing.

Digital computer architecture has the notion
          Instruction = Op code + Addresses of data.

One of the reasons I claim that this is a fundamental concept of CS is
that it is not clear to me that it is the only possible view of a
computer; adopting it, then, is a distinguishing characteristic of CS.
For example, a computer might be viewed as a device that transforms
inputs into outputs. From this viewpoint, it is not clear that you
need algorithms and data structures. To elaborate further, consider
a "learning" neural net. To program this device one does not design
algorithms and data structures, but presents it with a series of
input/output relationships. The neural net may have something 
corresponding to algorithms and data structures internally, but you're 
not concerned with these.  A neural-net engineer seems far less
concerned with algorithms and data structures than a software
engineer is.

mark@hubcap.UUCP (Mark Smotherman) (05/05/88)

I'll submit two items for your list: naming and binding.

Naming choices and binding actions underlie the design and operation of
translators and interpreters.  And what is a computer if not an interpreter?
Likewise, translation is a ubiquitous function when dealing with computers.

Naming consists of choosing the properties of the name space in which
actions and objects can be identified.  The major issues are:

* ordering -- Ordered names allow operations to be performed on one valid
  name to obtain another valid name.  E.g. semantics of the PC in the CPU,
  array addressing, linear segmentation (IBM).  Unordered names provide a
  measure of protection but must be managed by a naming table (or equivalent).
  E.g. symbolic segmentation (MULTICS).

* range and resolution

* homogeneity -- Does the name space identify objects or actions of identical
  type, or are multiple types allowed in the same name space?  E.g. general
  register set vs. function-specific registers, isolated I/O vs. memory-mapped
  I/O.

* sharing/conflict -- A particular name may be shared by several name spaces,
  and the mapping of this name may involve renaming (e.g. relocation) or
  context search rules (e.g. scope).

* alias/synonym

* dimensionality

* side effects -- E.g. autoincrement addressing.

Binding is related to naming since the mapping of a name space is an instance
of binding.  Yet binding is a more general concept: the specification of the
value of an attribute of an object or action.  Binding can be static or
dynamic, and early or late.  There are several binding times:

* language design time
* translator design time
* program design time
* preprocessor/macro processor time
* conditional assembly time
* compile time
* link time
* load time
* task initiation time
* dispatch time
* dynamic link time
* swap time/page fault time
* procedure call time (substitution of actual for formal parameters)
* execution time

Dynamic binding depends on the presence of a naming/attribute table (or
equivalent).  E.g. virtual memory page/segment tables, cache tags, the
load/stores in the code segment that manage the dynamic binding of the
register set, the link fields in a linked list, the schema in a database,
the RLD (relocation dictionary) for linking and loading, the ESD (external
symbol dictionary) for linking, the static links/display registers for an
activation record stack (static binding of variables but dynamic binding
of procedure calls).

(I tried to address these concepts, especially as they apply to a computer
organization course, in a paper given at ACM SIGCSE, St. Louis, 1987.  You
can find there several references to the work of other authors.)

-- 
Mark Smotherman, Comp. Sci. Dept., Clemson University, Clemson, SC 29634
INTERNET: mark@hubcap.clemson.edu    UUCP: gatech!hubcap!mark