raburns@sun.com (Randy Burns) (11/07/89)
I finally got around to reading Engines of Creation the other day (my first copy, which I got a couple of years ago, got lost when I was only 1/4 of the way through it). I have a couple of questions:

1) Who is the most thoughtful critic of the feasibility of nanotech? (And don't tell me J. Rifkin!) Have the potential problems been explored by anyone with substantial engineering experience and put together in a cogent fashion?

2) I got the distinct impression that Drexler was expecting some substantial breakthroughs in artificial intelligence. Having worked at Teknowledge, I'm now rather skeptical of this. How important would AI really be to make nanotech work?

3) What are the likely intermediate steps towards the construction of an assembler? I somehow have trouble imagining an existing government or corporation funding the creation of an assembler, since the consequences are so unpredictable -- who might be organized enough to put a few million dollars into this? (The best I can think of is maybe one of the more recent microcomputer software millionaires doing it as a way to be remembered.)

4) What kind of literature has been written on the social consequences of nanotechnology?

Thanks!

[1. A week ago the first real conference on nanotechnology was held. There was a panel discussion on just this question. Everybody there seemed to see a pretty clear path to nanotechnology.
2. AI is not terribly important, though we will need an easily foreseeable advancement in CAD/CAM and simulation capabilities. No AI papers, for example, were given at the conference.
3. Proto-assemblers, as envisioned by Drexler, would be something like a custom (chemically synthesized) molecule used as a tip on the end of an STM probe, used to do site-by-site catalysis of chemical reactions on a workpiece molecule.
4. There were three talks and a panel discussion on the subject at the conference. I'll be posting summaries and my own comments on the whole thing soon.
--JoSH]
landman@hanami.eng.sun.com (Howard A. Landman x61391) (11/08/89)
In article <Nov.6.18.08.19.1989.4224@athos.rutgers.edu> raburns@sun.com (Randy Burns) writes:
> 2) I got the distinct impression that Drexler was expecting some
> substantial breakthroughs in artificial intelligence. Having
> worked at Teknowledge, I'm now rather skeptical of this. How
> important would AI really be to make nanotech work?

JoSH responds:
> 2. AI is not terribly important, though we will need an easily foreseeable
> advancement in CAD/CAM and simulation capabilities. No AI papers,
> for example, were given at the conference.

As a CAD professional and an attendee at the conference, I can perhaps shed some light on this point.

I, too, found distressing the way EoC cavalierly glossed over CAD and DA problems for nanotech with the argument that super-AI would solve all those problems for us. Very little industrial-strength CAD is done using AI-based tools today, and the fraction of AI in a field like that tends to *decrease* as the field matures. For example, computer Chess used to be an AI topic but is now merely an engineering topic, a fact which seems to give many AI people heartburn. All the programs in the latest international computer Chess championship were written in C, and many of them had special-purpose hardware.

At first it might seem that existing tools for system and logic-level design would still be adequate for nanotech, at least for some portions of it like molecular electronic computers. It's rather obvious that the lower levels need to be completely different, and the amount of work to create the nanotech equivalent of a silicon compiler is immense. However, I've done some experiments which indicate that perhaps even the higher levels of present-day tools are inadequate.

For example, I created a dummy technology in which Fredkin gates were cheap and fast but normal logic gates (nand, nor, invert) were expensive and slow. Using one of the best commercial logic synthesis tools, I tried to synthesize a circuit using this technology. It made no use whatsoever of the Fredkin gates, and instead produced a netlist consisting entirely of ordinary gates. This indicates to me that substantial theoretical work still needs to be done in the area of logic synthesis for conservative/reversible logic before we can design large systems using such technologies. I know of no one (besides myself) who is even aware of this problem, let alone working on it. And this is just one *small* area of CAD for nanotech.

In electronics, CAD tools tend to lag about one generation behind the hardware. That is, today's tools are perfect for the system you built a couple years ago, but they never quite handle what you need for *today's* design. Also, support for mainstream technologies is always better than that for fringe technologies. Even today, design tools for ECL and GaAs are more limited than those for CMOS. In the early days of nanotech, I do not expect CAD to be at all well developed. Only after nanotech surpasses existing technologies in volume will there be sufficient incentive for the tools to become powerful, stable, elegant, and well-integrated.

	Howard A. Landman
	landman%hanami@eng.sun.com
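
P.S. For readers who haven't run into it: the Fredkin gate is a controlled swap -- one control bit and two data bits go in, and the data bits are exchanged exactly when the control is 1. Applying it twice restores the inputs, which is what makes it conservative/reversible. A toy C sketch of the gate's behavior (purely illustrative -- this is not the synthesis experiment described above):

    #include <stdio.h>

    /* Fredkin gate: controlled swap.  If c is 1, a and b are
     * exchanged; if c is 0, they pass through unchanged.  The
     * control is never altered, and applying the gate twice
     * restores the inputs, so no information is destroyed. */
    void fredkin(int c, int *a, int *b)
    {
        if (c) {
            int t = *a;
            *a = *b;
            *b = t;
        }
    }

    int main()
    {
        int c, a, b;

        printf(" c a b | c a' b'\n");
        for (c = 0; c <= 1; c++)
            for (a = 0; a <= 1; a++)
                for (b = 0; b <= 1; b++) {
                    int x = a, y = b;
                    fredkin(c, &x, &y);
                    printf(" %d %d %d | %d %d  %d\n", c, a, b, c, x, y);
                }
        return 0;
    }

With constants on the data inputs you can read AND and NOT off the outputs (b' = c AND a when b is tied to 0; a' = NOT c when a=1, b=0), which is one way to see that the gate is logically universal.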
honavar@goat.cs.wisc.edu (Vasant Honavar) (11/10/89)
In article <Nov.7.16.24.43.1989.22107@athos.rutgers.edu> landman@hanami.eng.sun.com (Howard A. Landman x61391) writes:
>I, too, found distressing the way EoC cavalierly glossed over CAD and DA
>problems for nanotech with the argument that super-AI would solve all those
>problems for us. Very little industrial-strength CAD is done using AI-based
>tools today, and the fraction of AI in a field like that tends to *decrease*
>as the field matures.

It is perhaps more accurate to say that as the field matures, what used to be called "AI" tends to get integrated into standard computer programming practice. This has happened with expert systems, for example.

>In electronics, CAD tools tend to lag about one generation behind the hardware.
>That is, today's tools are perfect for the system you built a couple years ago,
>but they never quite handle what you need for *today's* design.

All the more reason to exploit AI -- especially learning programs that are designed to be trained on a variety of problem domains. Such programs can potentially be taught to handle current technology, just as a skilled engineer adapts himself to new technological or scientific developments.

________________________
Vasant Honavar
Computer Sciences Dept.
University of Wisconsin
honavar@cs.wisc.edu
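
P.S. A toy illustration of what I mean by a learning program: the perceptron below acquires the AND function purely from labeled examples -- nothing about AND is wired in, and retargeting it to another (linearly separable) function means changing only the training data, not the program.

    #include <stdio.h>

    /* A toy perceptron trained on the AND function.  The weights
     * start at zero and are learned from the four labeled examples
     * by the classic perceptron update rule. */
    int main()
    {
        int x[4][2]   = { {0,0}, {0,1}, {1,0}, {1,1} };
        int target[4] = {   0,     0,     0,     1   };
        double w[2] = { 0.0, 0.0 }, bias = 0.0, rate = 0.1;
        int epoch, i;

        for (epoch = 0; epoch < 100; epoch++)
            for (i = 0; i < 4; i++) {
                int out = (w[0]*x[i][0] + w[1]*x[i][1] + bias > 0.0);
                int err = target[i] - out;
                w[0] += rate * err * x[i][0];   /* perceptron rule */
                w[1] += rate * err * x[i][1];
                bias += rate * err;
            }

        for (i = 0; i < 4; i++)
            printf("%d AND %d -> %d\n", x[i][0], x[i][1],
                   (w[0]*x[i][0] + w[1]*x[i][1] + bias > 0.0));
        return 0;
    }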
brianm@cat50.cs.wisc.edu (Brian Miller) (11/10/89)
In article <Nov.7.16.24.43.1989.22107@athos.rutgers.edu> landman@hanami.eng.sun.com (Howard A. Landman x61391) writes:
>Very little industrial-strength CAD is done using AI-based
>tools today,

Today's CAD technology is relatively embryonic. More adolescent CAD technology will probably integrate approaches to problems provided by *many* facets of computer science, engineering, and information theory. This includes AI. When one passes judgement on CAD's employment of AI techniques, he must realize that the phenomenon he is observing has been hastily implemented.

>...and the fraction of AI in a field like that tends to *decrease*
>as the field matures.

Doubt it, seriously. As a tool, AI is ideally suited to tackling design problems.

>For example, computer Chess used to be an AI topic but
>is now merely an engineering topic,

It is true that the fastest and best chess machines are developed from the hardware up, but hardware design is itself a fruitful playing field for AI. Let's keep _that_ a secret! :{)

>All the programs in the latest international computer Chess
>championship were written in C,

The language used does not always limit the approach to a problem. AI can be implemented in C if the software engineer feels most comfortable with C; any language would do. After all, AI is an abstract method, a partial approach to solving a problem, and it may be harnessed in any language. C is favored for its structure, flexibility, and proximity to the host system. Its selection is purely an implementation decision.

>In electronics, CAD tools tend to lag about one generation behind the hardware.
>That is, today's tools are perfect for the system you built a couple years ago,
>but they never quite handle what you need for *today's* design. Also, support
>for mainstream technologies is always better than that for fringe technologies.
>Even today, design tools for ECL and GaAs are more limited than those for CMOS.

Yeah, there's a terrible lag between the development of a technology and the creation of CAD tools to harness it. It's like being hungry and realizing that you have to make it to the fridge before you can get down to business. :)

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> brianm.
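
P.S. To make the "AI in C" point concrete, here is a bare-bones negamax game-tree search -- the skeleton at the heart of most chess programs -- written in plain C over a toy game (Nim), since a full chess move generator would run pages. (Illustrative only; real programs add alpha-beta pruning, move ordering, and often the special-purpose hardware mentioned above.)

    #include <stdio.h>

    /* Negamax over a toy game: take 1-3 sticks from a pile, and
     * the player who takes the last stick wins.  Swap in a board
     * type, a move generator, and an evaluation function, and this
     * same skeleton is the core of a chess program. */
    static int negamax(int pile)
    {
        int take, best = -1;

        if (pile == 0)
            return -1;   /* opponent took the last stick: we lost */
        for (take = 1; take <= 3 && take <= pile; take++) {
            int score = -negamax(pile - take);
            if (score > best)
                best = score;
        }
        return best;
    }

    int main()
    {
        int n;
        for (n = 1; n <= 10; n++)
            printf("pile of %2d: %s for the player to move\n",
                   n, negamax(n) > 0 ? "win" : "loss");
        return 0;
    }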
landman@hanami.eng.sun.com (Howard A. Landman x61391) (11/21/89)
>In article <Nov.7.16.24.43.1989.22107@athos.rutgers.edu> landman@hanami.eng.sun.com (Howard A. Landman x61391) writes:
>>Very little industrial-strength CAD is done using AI-based
>>tools today, and the fraction of AI in a field like that tends to *decrease*
>>as the field matures.

In article <Nov.9.18.10.52.1989.5434@athos.rutgers.edu> honavar@goat.cs.wisc.edu (Vasant Honavar) writes:
> It is perhaps more accurate to say that as the field matures,
> what used to be called "AI" tends to get integrated into
> standard computer programming practice. This has happened with
> expert systems, for example.

I recognize that this happens, but that's not what I'm talking about. Generic techniques such as AI are best when you don't really understand the problem you're trying to solve. Once you understand it well, you typically spend most of your execution time in a tight loop executing a well-defined algorithm. This is true of simulation, timing analysis, finite-element analysis, geometric rule checking, etc., etc. Applying AI to these problems makes about as much sense as applying AI to computational fluid dynamics or interactive 3-D graphics; it's like using a coping saw to rip plywood.

Where AI may be able to help is in the development of new algorithms, and in high-level design exploration. But I really doubt that the bulk of actual CAD/DA computation will be spent on "AI" programs.

	Howard A. Landman
	landman%hanami@eng.sun.com
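
P.S. An example of the kind of "tight loop" I mean: the core of a static timing analyzer is just a longest-path relaxation over a topologically ordered netlist, as in the toy C sketch below (a hardwired three-gate circuit; production tools differ in scale and bookkeeping, not in kind). There is nothing here for an inference engine to speed up.

    #include <stdio.h>

    #define NGATES 3
    #define MAXIN  2

    /* One gate: its propagation delay and the indices of the gates
     * driving its inputs (-1 = primary input, arrival time 0). */
    struct gate {
        double delay;
        int fanin[MAXIN];
    };

    /* A hardwired toy netlist, already in topological order: gates
     * 0 and 1 are fed by primary inputs; gate 2 is fed by both. */
    static struct gate net[NGATES] = {
        { 1.0, { -1, -1 } },
        { 2.0, { -1, -1 } },
        { 1.5, {  0,  1 } },
    };

    int main()
    {
        double arrival[NGATES];
        int i, j;

        /* The tight loop: one longest-path relaxation per gate.
         * This is where essentially all the CPU time goes. */
        for (i = 0; i < NGATES; i++) {
            double t = 0.0;
            for (j = 0; j < MAXIN; j++) {
                int src = net[i].fanin[j];
                if (src >= 0 && arrival[src] > t)
                    t = arrival[src];
            }
            arrival[i] = t + net[i].delay;
        }

        for (i = 0; i < NGATES; i++)
            printf("gate %d: arrival time %.1f\n", i, arrival[i]);
        return 0;
    }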