Notes by Howard Gardner
The original metaphor for each of the several intelligences was that of a computer, or a computational device. I sought to convey that there exist different kinds of information in the world—information deliberately more abstract than a signal to a specific sensory organ—and that the human mind/brain has evolved to be able to assimilate and operate upon those different forms of information. To be more concrete, as humans we are able to operate upon linguistic information, spatial information, musical information, information about other persons, and so on—and these operations constitute the machinery of the several intelligences.
Even at the time the theory was conceived—around 1980—I was at least dimly aware that there existed various kinds of computational processes and devices. And by the middle 1980s, I had become aware of a major fault line within the cognitive sciences. On the one hand, there are those who (in the Herbert Simon or Marvin Minsky tradition) think of computers in terms of their operating upon strings of symbols—much like a sophisticated calculator or a translator. On the other hand, there are those who (in the David Rumelhart or James McClelland tradition) think of computers in terms of neural networks that change gradually as a result of repeated exposure to certain kinds of data presented in certain kinds of ways. A fierce battleground featured rival accounts of how human beings all over the world master language so efficiently—but the debate has since played out with respect to many other kinds of information.
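The contrast between the two traditions can be made concrete with a toy illustration (a hypothetical sketch, not drawn from the original essay): a hand-written rewrite rule that operates on symbol strings, versus a single artificial neuron whose weights change gradually with repeated exposure to examples.

```python
# 1. Symbol manipulation (the Simon/Minsky tradition): an explicit,
#    hand-written rule operates on a string of symbols, much like a
#    sophisticated calculator or translator.
def pluralize(noun):
    """Apply an explicit rewrite rule to a symbol string."""
    if noun.endswith(("s", "x", "ch", "sh")):
        return noun + "es"
    return noun + "s"

# 2. Neural network (the Rumelhart/McClelland tradition): no rule is
#    written by hand; a single neuron's weights adjust gradually as it
#    is repeatedly exposed to labeled examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from data via the perceptron update rule."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred          # 0 when correct; +/-1 when wrong
            w0 += lr * err * x0          # nudge weights toward the target
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Repeated exposure to examples of logical AND.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(and_data)

def predict(x0, x1):
    """Classify an input using the learned weights."""
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

The first function encodes its knowledge in an explicit rule; the second arrives at equivalent behavior for its task purely through gradual weight adjustment.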
Fast forward thirty years. Not only do we have computational devices that work at a speed and with amounts of information that were barely conceivable a few decades ago. We are also at the point where machines seem to have become so smart at so many different tasks—whether via symbol manipulation or parallel distributed processing or some other process or processes—that they resemble or even surpass the kinds of intelligence that, since Biblical times, we have comfortably restricted to human beings. Artificial intelligence has in many respects (or in many venues) become more intelligent than human intelligence. And to add to the spice, genetic manipulations and direct interventions on the brain hold promise—or threat—of altering human intelligence in ways that would have been inconceivable, except possibly to writers of science fiction.
In an essay, “Three Cognitive Dimensions for Tracking Deep Learning Progress,” Carlos Perez describes the concept of AGI—self-aware sentient automation. He goes on to delineate three forms of artificial intelligence. The autonomous dimension reflects the adaptive intelligence found in biological organisms (akin to learning by neural networks). The computational dimension involves the decision-making capabilities that we find in computers as well as in humans (akin to symbol manipulation). And the social dimension involves the tools required for interacting with other agents (animate or mechanical)—here Perez specifically mentions language, conventions, and culture.
These three forms of artificial intelligence may well be distinct. But it is also possible that they confound function (what a system is trying to accomplish) with mechanism (how the system goes about accomplishing the task). For instance, computation involves decision making—but decision making can occur through neural networks, even when intuition suggests that it is occurring via the manipulation of symbols. By the same token, autonomous intelligence features adaptation, which does not necessarily involve neural networks. I may be missing something—but in any case, some clarification on the nature of these three forms, and how we determine which is at work (or in play), would be helpful.
Returning to the topic at hand, Perez suggests that these three dimensions map variously onto the multiple intelligences. On his delineation, spatial and logical intelligences align with the computational dimension; verbal and intrapersonal intelligences align with the social dimension; and, finally, the bodily-kinesthetic, naturalistic, rhythmic-musical, and interpersonal intelligences map onto the autonomous dimension.
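Perez's proposed alignment, as delineated above, can be summarized as a simple lookup table (a sketch for illustration only; the labels follow the intelligence names used in this essay):

```python
# Perez's mapping of the multiple intelligences onto his three
# dimensions of artificial intelligence, as described in the essay.
perez_mapping = {
    "computational": ["spatial", "logical"],
    "social": ["verbal", "intrapersonal"],
    "autonomous": ["bodily-kinesthetic", "naturalistic",
                   "rhythmic-musical", "interpersonal"],
}

def dimension_of(intelligence):
    """Return the dimension to which Perez assigns a given intelligence."""
    for dimension, intelligences in perez_mapping.items():
        if intelligence in intelligences:
            return dimension
    return None
```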
I would not have done the mapping in the same way. For example, language and music seem to me to fall under the computational dimension. But I applaud the effort to conceive of the different forms of thinking that might be involved as one attempts to account for the range of capacities of human beings (and, increasingly, other intelligent entities) that must accomplish three tasks: carry out their own operations by the available means; evolve in light of biological and other physical forces; and interact flexibly with other agents in a cultural setting. I hope that other researchers will join this timely effort.
I thank Jim Gray and David Perkins for their helpful comments.
To see the complete article by Carlos E. Perez, please click here: https://medium.com/intuitionmachine/deep-learning-system-zero-intuition-and-rationality-c07bd134dbfb