Marvin Minsky is worried that after making great strides in its infancy, AI has lost its way, getting bogged down in different theories of machine learning. Researchers “have tried to invent single techniques that could deal with all problems, but each method works only in certain domains.” Minsky believes we’re facing an AI emergency, since soon there won’t be enough human workers to perform the necessary tasks for our rapidly aging population.
So while we have a computer program that can beat a world chess champion, we don’t have one that can reach for an umbrella on a rainy day, or put a pillow in a pillowcase. For “a machine to have common sense, it must know 50 million such things,” and, like a human, activate different kinds of expertise in different realms of thought, says Minsky.
Minsky suggests that such a machine should, like humans, have a very high-level, rule-based system for recognizing certain kinds of problems. He labels these parts of the brain “critics.” When one critic gets selected in a particular situation, the others get turned off. In the “cloud of resources” that comprises our mind, mental states, from emotions to reasoning, result from activating or suppressing the right resources. Minsky further refines his machine’s reasoning architecture with six levels of thinking that attempt to emulate the different kinds of reasoning humans may engage in, often simultaneously: these include learned reactions, deliberative thinking, and reflective thinking, among others. A smart machine must have at least these levels, he says, because psychology, unlike physics, doesn’t lend itself to a minimal number of laws. With at least 400 different areas of the brain operating, “if a theory tries to explain everything by just 20 principles, it’s doing something wrong.”
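The critic-selector idea above can be sketched in a few lines of code. This is a toy illustration, not Minsky’s actual design: the critic names, the string-matching rules, and the resource labels are all invented here for the example. Each “critic” is a rule that recognizes one kind of problem; when a critic fires, it alone determines which mental “resources” get activated, implicitly suppressing the rest.

```python
# Toy sketch of a critic-selector architecture (hypothetical names and
# rules; not Minsky's implementation). A critic recognizes a kind of
# situation and, when selected, activates its associated resources.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Critic:
    name: str                          # hypothetical label for this critic
    recognizes: Callable[[str], bool]  # rule that spots a problem type
    resources: List[str]               # resources this critic switches on

def select_critic(situation: str, critics: List[Critic]) -> List[str]:
    """Return the resources activated by the first critic that fires.

    Selecting one critic implicitly suppresses all the others: only
    the winner's resources are returned.
    """
    for critic in critics:
        if critic.recognizes(situation):
            return critic.resources
    return []  # no critic recognized the situation

critics = [
    Critic("rain-critic", lambda s: "rain" in s, ["reach-for-umbrella"]),
    Critic("chess-critic", lambda s: "chess" in s, ["search-game-tree"]),
]

print(select_critic("it is starting to rain", critics))
# → ['reach-for-umbrella']
```

A fuller model in Minsky’s spirit would stack such selectors across his six levels, with reflective critics watching the deliberative ones below them; this flat version only shows the select-one, suppress-the-rest mechanism.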
Read the full summary at MIT World.