[comp.ai] Network Complexity

mth@ihnp3.UUCP (02/20/87)

(sorry if this is a duplicate posting, but my machine burped)

I am in the process of putting together a paper which attempts to
motivate planning and development of software-based Network Management tools.
In general, such tools would be both STRATEGIC, e.g. network topology planning
from the perspective of capacity, security, fault tolerance, etc., and
TACTICAL, such as visualization of network activity, dynamic routing, fault
recovery, congestion avoidance, etc.  Clearly, this field is ripe for
and in desperate need of AI, which is why I'm addressing it to this group.

Now, my intuition tells me that as these networks become more complicated,
we'll realize that the seat-of-the-pants network management we're used to is
inadequate, and we'll wish that we had spent time developing the right tools
to do the job.  I envision the complexity of our computer networks
exploding at some exponential rate, while our ability to understand and
control them is falling behind, growing relatively slowly.  This brings
me to my QUESTION:

	If we define the "complexity" of a computer network as a
	measure of difficulty in observing, understanding, and
	exercising a modicum of control over it, how is this
	"complexity" estimated?

	If we further choose a simple but intuitive way of representing
	a computer network by a graph, how do we quantify this "complexity"
	with respect to the graph's topology?

Clearly, metrics such as the number of nodes, edges, circuits, etc. have intuitive
appeal, but do not individually seem to convey the underlying combinatorial
explosion that, I believe, lurks underneath.
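
For concreteness, here is a rough sketch of a few such candidate numbers,
computed over an adjacency-list representation of the network.  (This is
purely illustrative Python of my own; the choice of metrics is an
assumption, not an answer.)  The cyclomatic number E - V + C counts the
independent circuits, and the entropy of the normalized degree sequence is
one crude stab at structural disorder:

    # Sketch of candidate "complexity" metrics for an undirected network
    # given as a list of (u, v) edges.  Metric choice is illustrative only.
    import math
    from collections import defaultdict

    def metrics(edges):
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        V = len(adj)
        E = len({frozenset(e) for e in edges})
        # Count connected components C with a simple depth-first search.
        seen, C = set(), 0
        for n in adj:
            if n not in seen:
                C += 1
                stack = [n]
                while stack:
                    x = stack.pop()
                    if x not in seen:
                        seen.add(x)
                        stack.extend(adj[x] - seen)
        mu = E - V + C    # cyclomatic number: count of independent circuits
        degs = [len(adj[n]) for n in adj]
        total = sum(degs)
        # Shannon entropy of the normalized degree sequence (one of
        # several "graph entropy" variants).
        H = -sum((d / total) * math.log2(d / total) for d in degs if d)
        return {"nodes": V, "edges": E, "circuits": mu, "degree_entropy": H}

    print(metrics([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))

Any one of these alone seems as inadequate as a raw node count; perhaps
some combination, or their growth rates as the network is extended, gets
closer to the "difficulty of control" I have in mind.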

Are you aware of any analytic, graph-theoretical, heuristic, empirical,
or otherwise useful metrics of such "complexity"?  I am not necessarily
looking for some absolute measure of the thing, but general concepts.
Any facts, comments, opinions and thoughts will be most appreciated.

					M. Horbal
					@ Bell Labs
					ihnp4!ihnp3!mth
					(312) 979-6496

belmonte@svax.UUCP (02/25/87)

In article <292@ihnp3.UUCP> mth@ihnp3.UUCP (Mark Horbal) writes:
>	If we define the "complexity" of a computer network as a
>	measure of difficulty in observing, understanding, and
>	exercising a modicum of control over it, how is this
>	"complexity" estimated?
>
>	If we further choose a simple but intuitive way of representing
>	a computer network by a graph, how do we quantify this "complexity"
>	with respect to the graph's topology?
I believe there may be another area relevant to the second question above,
though not the first.  A year ago I was doing an internship at NRL,
implementing a transition-network parser for some context-free grammars
which mimicked *small* subsets of English.  The question occurred to me:
how does one quantify the complexity of the transition networks we
generate?  (By "complexity" here I mean questions such as: Do we have a lot
of long paths consisting of nonterminals which will result in many failed
parses?  Do we have many null transitions that we can't squeeze out by
munging adjacent states together?  etc.)  The answer I got was, well, we
don't really know of any method of completely characterising such
complexities.  Is this the same sort of problem as the one you raise, or am
I completely off-base?
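
For what it's worth, here is a minimal sketch of the second kind of check
(null transitions that could be squeezed out).  The representation -- a
dict mapping each state to a list of (label, next-state) arcs, with a label
of None for a null transition -- is my own invention for illustration, not
what our parser actually used:

    # Flag null (epsilon) arcs whose source state has no other exits,
    # so the two states could be munged together into one.
    def mergeable_null_arcs(net):
        return [(s, arcs[0][1])
                for s, arcs in net.items()
                if len(arcs) == 1 and arcs[0][0] is None]

    # Toy fragment: S --NP--> q1 --(null)--> q2 --VP--> q3
    net = {"S":  [("NP", "q1")],
           "q1": [(None, "q2")],
           "q2": [("VP", "q3")],
           "q3": []}
    print(mergeable_null_arcs(net))   # -> [('q1', 'q2')]

Counting such arcs, or the lengths of the all-nonterminal paths, still only
measures one axis of the complexity at a time, which I suppose is the same
objection raised above about node and edge counts.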
Disclaimer:  Yes, I know I'm extraordinarily weak on theory, but I'm a lowly,
simple-minded freshman, so I have an excuse.
-- 
"When you've got them by the balls, their hearts and minds will follow."
 -- a member of the Nixon administration
Matthew Belmonte
Internet:  <belmonte@svax.cs.cornell.edu>
BITNET:  <d25y@cornella> <d25y@crnlvax5>
UUCP:  ..!decvax!duke!duknbsr!mkb