40 Years of Multiple Intelligences

It has been 40 years since Howard Gardner’s Frames of Mind: The Theory of Multiple Intelligences was first published in 1983. To mark the occasion, Branton Shearer has launched the MI@40 Newsletter reflections project, a regularly published newsletter that collects reflections on the impact of multiple intelligences around the world.

If you would like to subscribe to the MI@40 newsletter, or to obtain past issues, please use this link. You may also email Branton at sbranton@kent.edu.

Rise and Fall of IQ

By Howard Gardner

A recent article (link here) by John Anderer asks “Are we growing more dumber?” [sic]. Many readers will have seen similar headlines. The fact that people can drop on some dimensions but not on others shows that IQ is not a single monolith. The finding should not surprise anyone who is sympathetic to “MI perspectives.” After all, there is no reason to think that when one measure of intelligence goes up—or goes down—the others will necessarily move in the same direction as well.

As this article points out, around the world IQ has been rising steadily over recent decades, especially on spatial measures—presumably because our lives are enmeshed in various kinds of visual and technological entities, most of which call on spatial capacities. As we live in a world increasingly permeated by “artificial intelligence,” devices, and algorithms, we can expect similar shifts in profiles of intelligence.

What will algorithms like ChatGPT do to our personal intelligences? For now, this remains a topic for speculation—if not science fiction—but for how long is difficult to assess. We may decide to attribute personal knowledge to algorithms; we may decide to deny them that form of knowledge; or the algorithms may make their own decision!

Photo by ALAN DE LA CRUZ on Unsplash

Howard Gardner on the Work Healthy Podcast

Howard Gardner was recently interviewed on Work Healthy, a podcast hosted by John Ryan that aims to provide “access to the world of healthy workplaces, digging deep to uncover the practices and approaches used by organizations worldwide in their attempts to rewrite the rules of the workplace as we know it.”

In this episode titled “Rethinking Intelligence,” Gardner discusses his theory of multiple intelligences, the value of unlearning and relearning, and what it means to foster a synthesizing mind.

“Synthesizing takes time,” he says. “And if I can make any educational contribution going forward, I would like teachers—and that doesn’t just mean teachers at school, I mean teachers at home, your parents, your family, teachers in the workplace, your managers, your boss, your colleagues—to try to help people develop better capacities for synthesis.”

Gardner also reflects on criticism of past work and addresses social media’s impact on the younger generation, ethical decisions and implications in education, AI and machine-learning, and influential voices in the past millennium.

Click here to listen to this episode.

Photo by Gertrūda Valasevičiūtė on Unsplash

AI and Diplomacy: The implications for MI theory

by Howard Gardner and Shinri Furuzawa

The advent of increasingly competent—one could easily say “increasingly intelligent”—computer algorithms raises this question: Which roles and occupations that have long been the prerogative—one could even say, the “exclusive prerogative”—of human beings could be handled as well as, or perhaps better, by AI? ChatGPT is the current angst-inspiring algorithm, though it will certainly not be the only authoring program available; beyond question, it threatens the future of many educational pathways and many careers as we have come to know them.

In the previous blog “AI, Personal Intelligences, and Diplomacy,” Shinri Furuzawa specified the intelligences that are presumably entailed in the practice of diplomacy. When it comes to linguistic and/or logical-mathematical intelligence, ChatGPT and kindred programs are increasingly similar to, and often better than, human beings. In contrast, several intelligences—spatial, musical, bodily-kinesthetic, naturalist—appear unnecessary for tackling diplomatic challenges. That leaves for consideration the intelligences concerned with personhood: interpersonal intelligence (understanding of others) and intrapersonal intelligence (understanding of self).

What of “emotional intelligence”?

Dr. Martin Luther King, Jr. Civil rights march, Washington DC, 1963

Those with a casual interest in these matters will immediately ask “What about emotional intelligence?” It’s fine to use that term if you prefer, but Howard distinguishes his concepts from those of Daniel Goleman and his associates. In a word, that’s because “emotional intelligence” conflates an understanding of the world of persons with knowing how best to use that skill for benevolent purposes. By that understanding, Martin Luther King or Florence Nightingale might appear no different from a scam artist. Howard prefers not to connect computation with a specific value system. Emotional intelligence can be used to ingratiate or to manipulate.

Interpersonal intelligence

Back to the realm of the personal intelligences—these refer to abilities without presuming how those intelligences will be used. Without question, interpersonal intelligence (understanding of other persons and how to deal effectively with them) is crucial in diplomacy—and, indeed, in any interaction with other persons. This is a skill which begins early in life and can clearly be enhanced through practice and training. Most neurotypical individuals have little trouble picking up cues about the emotions and perhaps even the motivations of those with whom they are in regular contact. In contrast, individuals on the autistic spectrum are characterized as having difficulty with this form of understanding. That does not mean, however, that they are incapable of picking up such cues—they just need to do it in other ways.

How to train interpersonal intelligence

In his book Life, Animated, journalist Ron Suskind describes how his son Owen, who has ASD, learned social connection through the medium of Disney movies. Owen had memorized and could reenact entire scenes from these movies, using them to interpret emotions, behavior, and moral lessons which could be transferred to human interaction. Owen was able to train his personal intelligences by seeing appropriate emotional responses modeled in the movies. The movie dialogues provided the words and phrases to express emotions, which may also have helped him develop linguistic intelligence. This may provide a model for AI to develop personal intelligences.

At the other end of the spectrum, there are rare individuals who can pick up and remember the most minute details in the faces, bodily posture, and tone of voice of other persons. Skilled theater actors might be one example, skilled politicians another. In the case of actors, they not only observe acutely but can also mimic or impersonate. By careful study of such talented individuals, we may learn about the personal intelligences—how these intelligences are used and developed. This knowledge and understanding in turn can be drawn on by computer scientists or algorithm developers—for positive or negative purposes.

May 2002: Vladimir Putin presents George W. Bush with a letter from Catherine the Great to George III in which she denies his request to send Cossacks to aid British forces in the American Revolution (Source: US Dept. of State Archive)

In short, the better we understand how human beings handle cues from others, especially in face-to-face interactions, the more likely it is that we can program algorithms to do the same. If, for example, Vladimir Putin could learn the best ways to negotiate with political leaders across the political spectrum, a contemporary Russian version of ChatGPT could be trained to gain the same insights.

At the same time, just as each human encounter can produce an update, so could each encounter for a computational diplomat—what we call learning from experience. As an example, George W. Bush would surely have updated his initial evaluation of Putin since the time when he commented,

"I looked the man in the eye. I found him to be very straightforward and trustworthy. We had a very good dialogue. I was able to get a sense of his soul.”

Intrapersonal intelligence

The issue of intrapersonal intelligence proves far more vexed. In modern Western society, we generally value an individual’s insights into his or her own personality. And it is just possible that different computer algorithms can also come to emulate different reflective capacities.

Psychotherapists are trained (and their profession was initiated) to help individual patients understand themselves better, thereby enhancing their intrapersonal intelligence. Indeed, such self-understanding—increased self-knowledge—has been a major goal of most forms of psychotherapy. Howard can offer a personal example. When, at a time of difficulty, he saw a psychoanalyst periodically over a few months, the analyst suggested that he consider a full-scale psychoanalysis. Howard asked, “Will I be any happier?” The therapist replied, “Not necessarily, but you’ll understand yourself better.” A terse definition of intrapersonal intelligence.

Only a Western ideal?

Consider that knowledge of oneself may be a Western ideal, one that began in classical times and was rejuvenated in the modern era, which one can date anytime from 1550 onward.

Evidence, admittedly controversial, comes from the writings of psychologist Julian Jaynes. Jaynes dates interest in and insights into one’s own personhood to the Greek era. In fact, he dates the origins to the works attributed to the oral bard Homer. In the Iliad, characters are invariably types—warriors, heroes, villains, protectors—and one gets no sense of Achilles or Agamemnon as distinct personalities. In intriguing contrast, in the Odyssey (presumably inscribed a few centuries later) we get insights into Ulysses as a specific person, a distinct personality. And as we consider individuals from the classical era—ranging from Socrates to Marcus Aurelius—we get clear senses of their own personalities, and, if we allow ourselves to squint a bit, their understandings of themselves as individuals.

We should not go overboard—at least no more overboard than we have already ventured! Yet many cultural anthropologists would agree that a focus on the self qua self does not characterize many traditional societies. Even today, Japan is much less of a psychological, and much more of a sociological, society than most other modern nations. And as our recent study of colleges underscores (link here), American college students are much more concerned with “I” than with “we.” Lest one dismiss the students as still developing, their parents, alumni, and trustees show even more of a concern with “I” than with “we.”

Intrapersonal intelligence in diplomats

George Kennan (1904-2005) American diplomat and historian

We have wandered quite far from the toolkit of the diplomat. In fact, for certain diplomats in certain situations, an understanding of self may be an important asset. Though, we would add, the understanding need not—and perhaps should not—be particularly deep. Howard has written previously (link here) about the overly introspective nature of George Kennan, an American diplomat and later historian.

Going out on a limb, we suggest that heightened intrapersonal intelligence is not an important requirement for a diplomat—whether animate or mechanical. Robert Blackwill suggested in his list of ideal qualities for successful diplomats (link here) that they should have an understanding of their own ideology and values, and of their level of tolerance for policies which do not align with their own beliefs. Some foreign policy job offers might seem flattering or enticing, but if the offering institution’s ideology is not compatible, then diplomats have to know themselves well enough to be aware that accepting such positions would mean a professional life full of “pain and torment.” Though such self-knowledge would be useful, we doubt that it should be high on a list of essential skills.

Ronald Reagan—US President 1981-9

To use an example from recent history, Ronald Reagan might have been well served if he had known when he was having a bad day or was suffering cognitive decline—or when he should have consulted with his wife, Nancy, or his Chief of Staff, James Baker. However, it was hardly necessary for Reagan to have insights into how his parents affected him, or even what kind of a parent he was to his own five children. As Lou Cannon, his excellent biographer, has expressed it, Reagan’s strength was not in logical-mathematical intelligence—it was in storytelling. We would add that Reagan had a good sense of which stories to tell to which audiences, and that reflected heightened interpersonal intelligence.

In sharp contrast to Ronald Reagan, who we suggest had relatively little insight into himself (he did not know or care about the depth or breadth of his psyche), Barack Obama had considerable insight into himself, as befits a 21st-century intellectual (see his memoir, Dreams from My Father). And yet, while critical of Reagan and admiring of Obama, we would hesitate to rank-order their diplomatic skills. In fact, Reagan may have been more successful in negotiating with the Soviet Union than Obama was with China. Going further out on a limb, we wonder to what extent intrapersonal intelligence has been as important in human history as the other forms of intelligence.

Stepping back, we may tentatively conclude that, in addition to linguistic and logical intelligences, a computer-as-diplomat needs to possess, or develop, a powerful sense of the individuals or groups with which it is negotiating. But sense of self—whatever that might mean to AI—can be saved for another day, or another world.

 

References

Cannon, L. (2000). President Reagan: The Role of a Lifetime. PublicAffairs.

Jaynes, J. (2000). The Origin of Consciousness in the Breakdown of the Bicameral Mind. Mariner Books.

Suskind, R. (2016). Life, Animated: A Story of Sidekicks, Heroes, and Autism. Kingswell.

Could AI Replace Diplomats?

By Shinri Furuzawa

New computer algorithms appear to be developing personal intelligences and are capable of outperforming us at games requiring skills once thought to be uniquely human. For decades, computers have surpassed us at games which are primarily logical, syntactic, or mathematical, such as chess or Go.

Now, however, a recent article in Science describes the Cicero algorithm, which can win against humans at the board game Diplomacy. Enjoyed by the likes of John F. Kennedy and Henry Kissinger, this is a game which requires intuition, persuasion, and deception. Cicero is able to discuss strategy, forge alliances, and carry out subterfuge and betrayal. It mimics natural human language in text conversations that entail negotiation with other players. Observing and evaluating the trustworthiness of other players, convincing others of one’s own trustworthiness, and dealing with imperfect information are key skills for actual human diplomats.

All things considered, one wonders how close AI could come to replicating the skills of a real diplomat and whether one day, AI could even replace human diplomats.

What Makes a Good Diplomat?

Former high-level American diplomat Robert Blackwill suggested fifteen qualities which he thought essential for diplomats. Perhaps a third of these are inherently human traits, such as resilience to failure or honesty, and therefore irrelevant to AI. In other areas, such as analytical skills, attention to detail, or knowledge of history, AI already surpasses humans.

AI would, however, struggle in any area involving in-person interactions, where the personal intelligences are especially vital. Diplomats must collaborate, accurately observe and evaluate others, and understand other people’s motivations while taking into account cultural, political, organizational, and other differences. These trained professionals form mental models of their counterparts and update them, often unconsciously.

Diplomats are also skillful in interpreting non-verbal cues such as facial expressions, eye movement, and body posture. For decades, it has been common for diplomats to receive specific instruction on these interpersonal skills. While AI has made advances in interpreting non-verbal cues and information, it is not yet reliable.

  • Facial and emotional recognition: AI is already being used to recognize faces and monitor people’s facial expressions, for example in airport security systems. A problem for affect-detection algorithms, however, is that facial expressions of emotion are not universal; the way people communicate their emotions can vary with culture or situation. AI also performs better at recognizing Caucasian faces than the faces of people of color, a further problem that may lead to racial profiling.

    If AI can’t yet read us well by looking at our faces, it does better at listening to our voices.

  • Voice analysis: AI already has voice recognition and realistic voice generation. It can now also be used to detect patterns and characteristics in the voice that cannot be picked up by the human ear. Algorithms can predict psychiatric illness and other health conditions. By analyzing recordings of Vladimir Putin’s voice in February and March of 2022 during the ongoing war in Ukraine and comparing them to a recording of a talk he gave in September 2020, AI was able to detect stress levels 40% above baseline. While AI can collect such data, it must still be interpreted by humans and cannot—or at least should not—be used to predict human behavior. 

    AI capabilities may still be nascent in some areas, but they will only improve in the future.

Could AI Render Human Diplomats Obsolete?

Diplomacy may involve skills that we have long considered to be quintessentially human. I talked to Steven Siqueira, a former Canadian diplomat and chief of staff for several UN peace operations, and to Dr. Martin Waehlisch, who leads the Innovation Cell in the Policy and Mediation Division of the UN Department of Political and Peacebuilding Affairs. I asked them what a diplomat does that an AI could never do. It seems to me that it comes down to interpersonal intelligence.

Steven Siqueira - former Canadian diplomat and chief of staff for several UN peace operations

Martin Waehlisch - leads the Innovation Cell in the Policy and Mediation Division of the UN Department of Political and Peacebuilding Affairs

  • Developing personal relationships: In Siqueira’s view, “You need personal relationships to get things done.”

    He gave the example of being tasked with establishing a UN mission in Sudan. Siqueira negotiated with a large number of separate stakeholders, which meant cultivating myriad relationships. AI may be able to form analyses and identify requirements, but actual implementation is a human task. It would be extremely hard for AI to navigate the interplay of personalities and the intricacies behind each stakeholder’s position: their limitations and their accountability, whether to politicians, the military, civil society, or the media, all while working together toward a mutually satisfactory outcome.

Political scientist Joseph Nye would agree on the value of human relationships. Nye describes the importance of “soft power” as opposed to traditional “hard power,” which relies on military or economic strength. He suggests that agreements and alliances today are fostered more through amicable relations, using tact and warmth, than through aggressive tactics. According to Nye, even a smile can be a soft power resource. Diplomatic efforts need to be directed at citizens, not just governments, shifting influence toward likeability, attraction, and relationship rather than (or at least in addition to) force or coercion. As Waehlisch says, “The future is about soft skills… I was skeptical of emotional intelligence but I’m more and more convinced.”

  • Innovative thinking: AI’s ability to think creatively and adapt to circumstances is also questionable. AI cannot respond in innovative ways if it is only drawing from the past. In the Diplomacy game, the chatbot is not creating anything new; it is recombining moves and messages according to their success rates in past games.

    In the real world, diplomats think on their feet and rely on their training and experience to deal with new situations. This aligns with the last point on Blackwill’s list; diplomats must be quick to recognize opportune moments and know how to exploit fortuitous and unforeseen circumstances when they arise.

  • Experience: In diplomacy, experience is crucial. Diplomats are trained through mentoring, and vital skills are learned on the job. Blackwill listed learning from experience as an essential skill for diplomats; as he puts it, “Would you hire a plumber who was academically well-versed in water distribution, but had never installed a pipe?”

What Role Does AI Have to Play in Diplomacy?

AI may fall short in personal intelligences, but it fares significantly better in linguistic and logical-mathematical intelligences. Siqueira and Waehlisch provided some insights into how AI is being used in diplomacy now, and how it could be used in the future.

  • Generating text: Blackwill’s list of essential skills includes the ability to write and speak well, or linguistic intelligence. The latest reports on OpenAI’s ChatGPT attest to AI’s ability to converse convincingly with a human. It can engage in philosophical discussions, tell (bad) jokes, and debate political issues; it can also write and debug code, write college-level essays, and take tests successfully. Whether the task entails making an after-dinner speech or giving a presentation, AI can be programmed to tailor language, tone, style, and format to match an audience. Many of the more mundane report-writing tasks performed by interns today could be carried out by AI. Diplomats will no doubt increasingly rely on AI for research.

  • Mediation: AI could be used to support mediation. At the UN, for example, all mediated agreements are in a database. AI could easily draw upon the same language to address situations similar to those that have arisen in the past. AI could scan and track different clauses—thereby providing valuable insights and perhaps helping to sustain peace efforts.

  • Advisory roles: Computers are able to process and instantly retrieve exponentially more information than humans, enabling them to take over traditional advisory roles. Diplomats on the UN Security Council use their smartphones to find information or receive instructions rather than relying on advisors to whisper in their ears. Computer programs and algorithms are superior at assessing data and anticipating outcomes—important skills in negotiation.

  • Targeting resources: At the UN, AI capacities in the form of data aggregators are already being used to analyze the press releases and communiques of all foreign ministries, allowing political officers to “mine the sentiment” on a given topic. Knowing which countries are most concerned about an issue enables targeted approaches—for example, by knowing which countries may be open to providing donor resources.

    Geospatial technology has recently made significant advances in providing “eyes in the sky.” These capacities entail data collection and analysis in fragile states, which can improve monitoring and allow targeted humanitarian or peacekeeping efforts. It’s important to remember, however, that early warning doesn’t mean early action; political decision-making must still be done by people. Technology can’t fill this gap.

  • Increased productivity: AI undoubtedly improves productivity. Internally, it tackles systemic challenges through automation and speed; externally, it enables closer human connections, which fosters inclusivity.

  • Access: AI also enables dialogue. Many groups that once could not have been part of the negotiation process, whether because of geographical remoteness or because they were simply not allowed at the table, can now be party to the conversation. Language and translation capabilities, extended through TV and radio mining, provide access to low-resource languages. Such outreach overcomes cultural and language barriers.

  • Training intrapersonal intelligence: New advances in virtual reality (VR) can be used to develop a diplomat’s intrapersonal intelligence. Such technology allows active “body swapping,” so people can “walk each other’s journeys.” Built-in behavioral science experiments may well detect implicit biases and identify cognitive challenges. VR provides a safe space for diplomats to learn about themselves, discover their biases, and better understand their interactions with others. Put differently, it fosters perspective-taking and helps overcome dehumanization. As Waehlisch suggested, “What if Netanyahu went through an Israeli checkpoint as a Palestinian?” In VR, he would see how people looked at him, the weapons pointed at him, and feel the danger, to perhaps reveal a new perspective.
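The mediation idea above, in which an algorithm surfaces precedent language from a database of past agreements, can be sketched with a simple similarity search. The sample clauses and the word-overlap (Jaccard) measure below are illustrative assumptions, not the UN’s actual system; a production tool would more likely rely on semantic embeddings than raw word overlap.

```python
import re

def tokens(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two clauses."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

# Hypothetical precedent clauses standing in for a database of agreements.
past_clauses = [
    "The parties agree to an immediate ceasefire along the contact line.",
    "Humanitarian corridors shall remain open for civilian evacuation.",
    "Prisoners shall be exchanged on an all-for-all basis.",
]

def most_similar(draft: str, clauses: list) -> list:
    """Rank past clauses by similarity to a draft under negotiation."""
    return sorted(clauses, key=lambda c: jaccard(draft, c), reverse=True)

draft = "Both parties commit to a ceasefire along the line of contact."
best = most_similar(draft, past_clauses)[0]
# The ceasefire clause ranks first, pointing the mediator to precedent language.
```

Even this crude ranking shows the shape of the task: the algorithm retrieves candidate language, while judging whether a precedent actually fits the situation remains a human call.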

Dangers of AI in Diplomacy

There are some things that can never be left to AI.

  • Decision making: Delegating decision-making to AI would be a mistake, even though in some ways it could be seen as desirable.

    It is conceivable that AI could be programmed to make more rational, fair, and evidence-based decisions than humans. After all, AI is not vulnerable to human emotions or weaknesses. For centuries, the ideal diplomat was like a robot, coldly efficient, rational, and devoid of emotion, as codified in diplomatic protocols. Indeed, diplomats are routinely rotated every few years to prevent emotional attachments. In contrast, AI has no problem remaining detached and calm in stressful situations. Without emotions or physical sensations, AI could not be threatened or made to feel vulnerable in the same way as a human—for example, as when Vladimir Putin used his dog to intimidate Angela Merkel, who is famously terrified of dogs. AI would not be motivated by personal gain or be tempted to abuse its authority, and it would be untroubled by the personal cost of resisting political pressure and standing by diplomatic policy decisions (in a nation’s interest) even if unpopular. AI would not be susceptible to exhaustion or lapses in judgment. In fact, in a survey conducted by the Center for the Governance of Change at IE University in Spain, one in four Europeans indicated that they would prefer policy decisions be made by AI rather than by politicians. However, in decision-making, complete rationality is not always best.

Take as an example the “Prisoner’s Dilemma” from game theory. Even though mutual cooperation would yield a greater net reward, the only equilibrium for two purely rational prisoners is mutual betrayal. And of course, in real life, this stance could quickly lead to escalated military action, or nuclear war and mutual destruction. Even if we set parameters beforehand, these may be incomplete or fail. Would AI have the ability to pull back? If an algorithm were tasked with bringing about world peace, an efficient move might be to eradicate all humans from the planet.
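The dilemma can be made concrete with a small sketch. The payoff numbers below are illustrative assumptions that follow the standard ordering (temptation > reward > punishment > sucker’s payoff); the point is only that, for a purely rational player, betrayal is the best reply to either move.

```python
# payoffs[(my_move, their_move)] = my payoff; higher is better.
payoffs = {
    ("cooperate", "cooperate"): 3,  # reward for mutual cooperation
    ("cooperate", "defect"):    0,  # sucker's payoff: betrayed while cooperating
    ("defect",    "cooperate"): 5,  # temptation: successful betrayal
    ("defect",    "defect"):    1,  # punishment for mutual betrayal
}

def best_response(their_move: str) -> str:
    """A purely rational player's best reply to a fixed opposing move."""
    return max(("cooperate", "defect"), key=lambda m: payoffs[(m, their_move)])

# Defection is the best reply whatever the other side does, so two purely
# rational players land on mutual defection (1 each), even though mutual
# cooperation (3 each) would leave both better off.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

The analogy to escalation is direct: an algorithm that optimizes only its own payoff can reason its way into the outcome that is worst for everyone.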

Yejin Choi, a computer scientist and 2022 recipient of the MacArthur “Genius grant,” makes the same point from an ethical standpoint. In one interview, she said that in the most fundamental ways, “AI struggles with basic common sense.”

While humans understand many things, such as common exceptions to rules, AI must be specifically taught, or be at risk of choosing extreme or damaging solutions that humans would never consider. Choi argues the challenge will be to account for value pluralism, to teach AI that values can be broad and that diverse viewpoints need to be taken into account. Ethical guidelines are necessary but there is no one moral framework that can be imposed. The implications for diplomacy are dangerous. While AI will continue to improve, Choi doubts that humans will ever create sentient artificial intelligence, or AI with true intrapersonal intelligence.

  • Malevolence: There is potential for AI technology to be used maliciously. We need to work on ways to forecast and mitigate such threats. The threat of AI is easy to see in what has been described as today’s “post-truth era.” AI is being used for negative messaging which leads to greater polarization, destabilization of existing frameworks, and the influencing of elections.

  • Bias: There is also the problem of bias in AI systems. While often seen as a technical problem, most AI bias stems from human biases and systemic, institutional biases. For machine-learning models to work well, a very large, diverse, and robust dataset involving all ages, genders, ethnicities, and other demographic criteria must be used. In the history of Western diplomacy, key decisions have been made mostly by men of a certain profile, which could certainly skew the dataset. Regulations and safeguards are of course necessary. Excessive concentration of AI capability in a handful of technology companies must also be avoided through regulation—for example, by encouraging competition and not allowing monopolization.

“The Greatest Threat and the Greatest Opportunity”

French Ambassador David Cvach said in a 2018 TEDx talk that AI is both the greatest threat and the greatest opportunity for diplomacy. There is truth to this assertion.

Sophia robot

In the field of international relations and diplomacy, AI is touted more often as a threat, for example, in terms of autonomous weapons. Though AI may have (often unintentional) negative consequences, organizations such as AI For Peace have a different stance: on their account, dialogue between academia, industry and civil society can help ensure the benefits of AI while minimizing the risks. Waehlisch has suggested machine learning and natural language processing can be used to promote peace. His chief concern is how to use new technologies to help de-escalate violence and increase international stability.

I would argue that while AI will augment the work of human diplomats making them more efficient and effective, it will never be more than a useful tool in diplomacy. Indeed, it could not and should not replace human diplomats. AI might outperform humans at most analytical tasks, but humans will still surpass AI at more subtle, “feeling tasks.” Even as algorithms come closer to replicating human interpersonal intelligence, direct person-to-person interaction is probably the most important method of increasing or maintaining “soft power” in diplomacy. Chatbots may be able to fool humans at the Diplomacy game online, but robots such as Sophia (appointed in 2017 as the UN Development Program’s first Innovation Champion), could not yet be mistaken for human.

On the positive side, the opportunities of AI lie in creating a more level playing field, as long as technology is not limited to wealthy countries. The ability of diverse stakeholders to use algorithms could provide more holistic and comprehensive solutions to today’s challenges, such as forced migration or unanticipated pandemics. Perhaps AI can be a means for engaging and uniting people around the world on issues of mutual interest for a more peaceful and sustainable future. We should use all our multiple intelligences, and the possibilities of artificial intelligence, to achieve this end.

In our next blog post, Howard Gardner will discuss the implications of AI in understanding human personal intelligences.

I would like to thank Howard Gardner for his valuable input into this post. I am also grateful to Steven Siqueira and Martin Waehlisch for very kindly agreeing to interviews and sharing their thoughts.