[comp.society.futures] *IF*? a poor rebuttal

36_5130@uwovax.uwo.ca (Kinch) (07/05/88)

In article <58.22CCF148@isishq.UUCP>, doug@isishq.UUCP (Doug Thompson) writes:
> 
>  
>  J> From: jimmyz%oak.dnet@VLSI2.EE.UFL.EDU (Anubis The Psychic Chaos 
>  
>  J> What leads you to believe the human mind is not a complex computer? 
>  J> Computers of our current time have a level of complexity no human 
>  J> device has EVER achieved before, and they are just a granule of what 
>  J> the human mind is. Or rather, the brain. The brain is undoubtedly a 
>  J> complex computer, but the mind is a non-tangible thing. Just as this 
>  J> VAX 8600 I am using now is a complex (supercomputers and the like 
>  J> aside) computer, but the programs I am using to send this message 
>  J> have absolutely no physical substance. 
>  J> I'd say there is a heck of a lot of similarity. 
>  J>  
>  
> Agreed, there is similarity. In the same way there is similarity between 
> the sun and a lightbulb. We have been making bigger and better lightbulbs 
> for quite a while now. Can we make one just like the sun? 
	
	What does "just like the sun" mean?

>  
> Well, in this case we happen to *know* there are differences as well as 
> similarities. We can't assemble enough matter, at least on earth, to 
> build something just like the sun. Our lightbulbs, though similar, do not 
> use the same sort of natural processes as the sun. 

	Maybe not, but "OUR" H-bombs come pretty damn close for my liking.

>  
> It is not a logically sound argument to say that because our technology 
> is advancing it will ever reach any given goal. The evidence of 
> advancing technology does not *prove* anything at all. 

	True, but to ignore that evidence is rather silly.

>  
> It is my hypothesis that there are fundamental differences between the 
> way organic thought operates in a human creature - that is to say, human 
> intelligence, and the totally logical, mathematical, effective 
> procedures which by definition are machine intelligence. 

	Completely agree. Hear, hear.

>  
> To build a good human intelligence out of silicon we would have to 
> minimally understand human thought and intelligence quite well. This is 
> really one of the more interesting parts of AI research today, because 
> the understanding of human intelligence is not really a "mechanical" 
> problem. Heck, I don't even understand *myself*! 
>  
> A good example is provided by chess programs. The intelligence required 
> to play chess is among the most mechanical and methodical. Machines can 
> play very good games of chess too. But they don't do it the same way 
> people do. They do it by making millions of calculations, and we *know* 
> that is not how people do it. People use something like intuition and 
> pattern recognition which we know to be quite independent of any 
> numerical analysis or number crunching. Computers play chess by doing an 
> immense amount of arithmetic. As a calculator, the human brain is really 
> quite slow. Something else is going on.  
>  
> Thus a quantum leap in technology is needed, a different kind of 
> computer, to even begin to process data of any sort (even mathematical 
> data) the way the human mind processes data. 

	Or at least a different way to program the same old computers.
I think that your "quantum leap in technology" is really a quantum leap in
logic. But I certainly agree that a new way to mimic thinking is needed.
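	Neither posting gives code, but the "immense amount of arithmetic"
style is easy to make concrete. Here is a toy sketch of my own (modern
Python, with a trivial Nim variant standing in for chess; the game and
names are my illustration, nothing from the thread). The program plays
perfectly, but only by grinding through every reachable position:

```python
# Brute-force game player in the spirit of the chess programs described
# above: it "plays" purely by exhaustive calculation, with no intuition
# or pattern recognition whatsoever.

def best_outcome(pile, counter=None):
    """Return +1 if the player to move can force a win, -1 otherwise.

    Rules assumed: players alternately remove 1-3 stones from a single
    pile; whoever takes the last stone wins.  The search simply tries
    every legal move and recurses -- pure number crunching.
    """
    if counter is not None:
        counter[0] += 1  # tally how many positions the machine examines
    if pile == 0:
        return -1  # no stones left: the previous player took the last one
    return max(-best_outcome(pile - take, counter)
               for take in (1, 2, 3) if take <= pile)

positions = [0]
print(best_outcome(12, positions))  # a 12-stone pile is a forced loss: -1
print(positions[0])                 # positions examined to discover that
```

Even a 12-stone pile forces the search through a couple of thousand
positions, which is exactly the contrast with human pattern recognition
that Doug is drawing. A human Nim player just notices that multiples of
four lose.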

>  
> We know that any AI problem is highly dependent on input. Now the input 
> into the "computer in my skull" comes through my eyes and my ears and my 
> fingers and toes, my nose and my mouth. To call an intelligence human, 
> it would need the same input spectrum.

	Or a subset of these; many humans work with relatively small subsets
of these inputs.

> Of course people are working on 
> computer smell and tactile sensors, and you might one day mimic the 
> whole human sensorium, and create a mechanical copy of a human's entire 
> experience. You might get a computer to react just like a man to a 
> beautiful sunset, a starving child, or a girl in a bikini. You might get 
> a computer pondering ethical problems and answering questions about the 
> relative merits of marxism vs capitalism as a social order. You really 
> might one day be able to do that (though I seriously doubt it), but God 
> forbid that anyone would *want* to! 

	Really! Why should GOD forbid this?

>  
> I've confused the issue between the can do and the should do. But the 
> should do speaks to the can do. What is a man? What is a machine? Do you 
> really honestly think that the indisputable similarities add up to a 
> potential identity? It strikes me as preposterous, and I am searching 
> for a language to articulate why that is.  

	But you obviously have not found that language.

>  
> The very fact that making mechanical men is something we should not be 
> doing from a moral and ethical perspective suggests to me that we 
> probably can't.

	Another huge leap: you didn't show me that we should not be doing 
this from either a moral or ethical perspective, but even if you had, how 
that shows or even suggests that we probably CAN'T is beyond my understanding. 
I do not even think the two are connected. We should not be murdering 
other humans, from a moral and ethical perspective, but this obviously does 
not mean that we CAN'T.

>  Why is that? It's hard to be precise, but think of the 
> sort of human intelligence manifested as a mother holds her newborn to 
> her breast to suckle.  Think about all the complex web of social, 
> emotional, political, relational and economic input into the behaviour 
> involved, (and there are two human intelligences to take into account in 
> this behaviour which is highly relational in nature) and then try to 
> think of a way to program a computer of any hypothetical power to mimic 
> it.  
>  
	
	So are you saying that because YOU can't think of a way to program 
these types of behavior into a computer, it must be an impossible 
task? 

	I am not an AI expert, or really an expert in anything, but I think
that I can spot faulty logic and poor reasoning when I see them; this reply
is the result of spotting both in your posting.

Dave Kinchlea
CCS Program Consultant
University of Western Ontario
London, Ontario,
Canada.