By Carver Mead; interviewed by Joel Dreyfuss. Caltech professor Mead, 55, is developing microchips with neural networks that imitate the way the mind processes information. Joel Dreyfuss spoke with him.

(FORTUNE Magazine) – The pioneers of artificial intelligence had in mind systems that are really intelligent in the way people and animals are -- systems that could see and hear. It turned out that higher-level tasks like playing chess were much easier than lower-level tasks like recognizing an object. We've gone through an increase of a factor of ten million in computing power, and in 30 years we still haven't really solved the computer vision problem.

We're gradually getting to a technology that does promise to address some of those problems. As we become able to build systems that mimic the analytic processes of the nervous system, we're going to see another wave of real artificial intelligence.

A robot welding a car doesn't feel around until it's got exactly the right place to weld. It just reaches out and grabs onto whatever is there. When people weld, they work around and get it just right. You could imagine all kinds of manufacturing processes where the machine adapts to the situation and doesn't gouge things up.

Anytime you have a real-world environment, you're faced with real data coming in raw: vision data, auditory data. Unless you're in a totally controlled environment, those data are extremely noisy, and you can't handle them with traditional computers. The first thing that neural networks will do is simply handle those kinds of data better.

There's no question in my mind that we will have machines that understand speech, that understand the world by looking at it. We're going to be able to do that, but it won't be tomorrow. Meanwhile, we'll have small steps. There was nothing wrong with the original vision. We just didn't have the technology to do it.