[comp.ai] am-ness, awareness, thinking

root@amsctg.UUCP (Admin) (12/12/89)

Existence (being or am-ness) precedes awareness, which precedes
thinking. Physical and chemical reactions have a lot of am-ness
and no identifiable awareness or thinking. Even simple organisms
show reaction to environmental events, which could be considered
as awareness. More complicated animals have definite self-awareness
(if they did not, they could not communicate territorial messages).
Thinking can be broken into two categories: a) symbolic, which can be
transferred into other computational systems, and b) semantic, which
cannot be transferred into systems that are unaware.

As food for thought in support of my last statement, consider the simple
paradox 'the next statement is TRUE; the preceding statement is FALSE'. This
paradox is nonsense to unaware systems (i.e., systems with no self-awareness).
To an aware system (such as you!) this paradox provides a clue that
symbolic reality (your thoughts) and physical reality (your existence)
are not identical.
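
To make that concrete, here is a rough sketch in C, a toy invented purely
for illustration and nothing more: a symbol-pushing evaluator that
enumerates every truth assignment for the pair of statements and checks
whether any assignment is consistent with what the statements claim. It
finds none, so at the purely symbolic level the paradox never settles;
an aware reader simply steps outside the symbols and sees why.

    /* Toy sketch: enumerate truth assignments for
     *   A: "B is true"      B: "A is false"
     * and report any assignment consistent with what the statements claim. */
    #include <stdio.h>

    int main(void)
    {
        int a, b, found = 0;

        for (a = 0; a <= 1; a++) {
            for (b = 0; b <= 1; b++) {
                /* A claims B is true  ->  a must equal b
                 * B claims A is false ->  b must equal !a */
                if (a == b && b == !a) {
                    printf("consistent: A=%d B=%d\n", a, b);
                    found = 1;
                }
            }
        }
        if (!found)
            printf("no consistent symbolic interpretation\n");
        return 0;
    }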

When we communicate about human semantic events (e.g., being in love),
we construct symbolic messages about events that are meaningless at a
purely symbolic level yet convey important information to another
person who has experienced similar events.

Purely symbolic systems can perform phenomenal feats. Yet they don't
even have the simplest instinct for self-preservation. Would you
rather trust your life (paycheck, wife, children, etc.) to systems
that cannot relate to your desire for a happy continued existence, or
to your fellow man, who might share some of the same ideals?

While it's wonderful to explore the limits of symbolic computation,
unless we are willing to discuss building self-aware systems, the term
artificial intelligence will remain a misnomer.

Robert Lindsay (uunet!amsctg!root)

My own opinions! - not that of my employer!

sn13+@andrew.cmu.edu (S. Narasimhan) (12/13/89)

> Existence (being or am-ness) precedes awareness, which precedes
> thinking. Physical and chemical reactions have a lot of am-ness
> and no identifiable awareness or thinking. Even simple organisms
> show reaction to environmental events, which could be considered
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> as awareness. 

>      --> If this is so, consider: a liquid boils on heating. We can say
> the liquid reacts to its surroundings. Does this mean the liquid has
> awareness? The problem is that awareness cannot be measured through
> physical activity or inactivity. Nor can one awareness perceive another.
> This is to say one can never be 100% sure whether another physical
> object has awareness or not. We can only speculate about it. <--

> More complicated animals have definite self-awareness
> (if they did not, they could not communicate territorial messages).

>          --> I don't think this is because of awareness. It could be
> due to the instinct of survival. Imagine a computer with a program
> running. Suppose this program is such that it kills any other program
> running on a computer beside this one. Can we say this
> "killer" program has "awareness"?  <--

> Thinking can be broken into two categories: a) symbolic, which can be
> transferred into other computational systems, and b) semantic, which
> cannot be transferred into systems that are unaware.

> As food for thought in support of my last statement, consider the simple
> paradox 'the next statement is TRUE; the preceding statement is FALSE'. This
> paradox is nonsense to unaware systems (i.e., systems with no self-awareness).
> To an aware system (such as you!) this paradox provides a clue that
> symbolic reality (your thoughts) and physical reality (your existence)
> are not identical.

> When we communicate about human semantic events (e.g., being in love),
> we construct symbolic messages about events that are meaningless at a
> purely symbolic level yet convey important information to another
> person who has experienced similar events.

> Purely symbolic systems can perform phenomenal feats. Yet they don't
> even have the simplest instinct for self-preservation. Would you
> rather trust your life (paycheck, wife, children, etc.) to systems
> that cannot relate to your desire for a happy continued existence, or
> to your fellow man, who might share some of the same ideals?

> While it's wonderful to explore the limits of symbolic computation,
> unless we are willing to discuss building self-aware systems, the term
> artificial intelligence will remain a misnomer.

>       --> The problem is, even if, by some freaky chance, we built a
> system that is aware of itself, we won't know about it. This is so
> because we don't yet have a test for "awareness" the way the Turing
> test is a test for intelligence. First of all, we don't even have a
> test to tell whether our fellow human beings are aware of themselves
> or not, apart from reasoning that since they look like us and behave
> somewhat like us, they have awareness. But if this reasoning is
> accepted, then the Turing test itself is a test for awareness of
> artificial systems. <--

         S.Narasimhan
          sn13+@andrew.cmu.edu