AI & MI: Artificial Intelligence and Multiple Intelligences

By Tom Hoerr

“What are multiple intelligences?” I asked ChatGPT, the new artificial intelligence (AI) technology.

Within a few seconds it said:

“Multiple intelligences are a theory of intelligence developed by Harvard psychologist Howard Gardner in 1983. It suggests that there are eight different types of intelligence: linguistic, logical-mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalistic. Each type of intelligence is associated with a different way of processing information and solving problems.”

Wow, yes, within a few seconds.

And why is this relevant to MI?

MI is all about problem-solving. Howard Gardner defined intelligence as the ability to solve a problem or create a product that is valued in a culture, and each of the eight intelligences encompasses a different set of skills and talents.

ChatGPT, a technology that can follow instructions in a conversational prompt and provide a detailed response (as it did to my question), is also about solving problems. The concept of MI was revolutionary when Frames of Mind was published in 1983. Today, the potential of AI captures our minds and activates our imaginations. Typing ChatGPT into Google yields 855,000,000 links in 0.065 seconds.

How do MI and AI interact? Or do they?

First, let’s be clear: we already routinely deal with some AI-MI, which we likely take for granted. Some salient AI-MI examples:

  • Linguistic: AI automated answering services. “Is that a real person?”

  • Logical-mathematical: AI chess or other game software. “I was check-mated again!”

  • Musical: AI playlists which emanate from our previous choices. “I don’t need to seek the songs I enjoy.”

  • Spatial: AI GPS directions. “Remember the AAA TripTik?!”

  • Bodily-kinesthetic: AI autopilot is coming. Check out “Robotic Limbs Moved By the Mind” in this 60 Minutes feature (click here for link).

Of course, just as performing a complex task requires using many intelligences, sophisticated AI acts require an amalgam of MI. For example, when buying something online (something I do far more than I should), the website features linguistic and spatial information about the products. Then, after I have purchased, I get a message that says, "Since you purchased X, you might be interested in Y and Z." The employees used their interpersonal and intrapersonal intelligences to conceptualize what a buyer of X might now want. Then they used their logical-mathematical intelligences to program the site.

But this is old news. As noted above, I did a small trial and asked ChatGPT “What are multiple intelligences?” The accurate response was based on readily accessible information and was generated within a few seconds. Then I posed a bit more complex question, “How can multiple intelligences be used?” Again, the lightning-fast response was accurate, although a bit superficial:

“1. Use visual aids such as diagrams, charts, and pictures to help students understand concepts. 2. Incorporate music and rhythm into lessons to help students remember information. 3. Use role-playing and simulations to help students understand abstract concepts. 4. Incorporate movement into lessons to help students stay engaged and learn.” And so on for each MI.

I decided to be a bit contrarian and asked, “What about the opposition to MI?” ChatGPT said:

“The opposition to multiple intelligences theory is largely due to the fact that it challenges the traditional view of intelligence as a single, unitary construct. Critics argue that the theory is too vague and lacks empirical evidence to support its claims. Additionally, some argue that the theory is too broad and that it fails to account for individual differences in intelligence. Finally, some argue that the theory is too focused on the individual and fails to consider the social and cultural contexts in which intelligence is developed.”

That response shows a bit more sophistication – particularly the last sentence – but simply regurgitates criticisms that have already been stated. The speed of the response, again within a few seconds, is very impressive.

To elicit a response that isn’t already lodged in libraries, I asked, “Can happiness be smelled?” The ChatGPT response:

“No, happiness is not something that can be smelled.” That’s true but I would expect a human with a bit of sophistication to continue, perhaps with, “No, happiness is not something that can be smelled but we surely associate some scents with happy feelings – freshly baked bread comes to mind as do popcorn and cookies.”

Of course, there’s an entire perfume industry.

The ChatGPT technology possesses some intrapersonal intelligence. When asked about its limitations, it responded:

“ChatGPT is not capable of solving problems that require complex reasoning or understanding of abstract concepts. It is also not capable of providing advice or making decisions for users. Additionally, ChatGPT cannot provide accurate answers to questions that require a deep understanding of a particular subject or domain.”

Relatedly, very relatedly, I asked if ChatGPT could ascertain emotion. It replied that it cannot and said,

“ChatGPT is a natural language processing (NLP) system that is designed to generate responses to user input. It does not have the capability to detect or interpret emotion.”

Technology continues to advance at an astonishing rate (remember floppy computer disks?), so presumably AI’s responses will become less superficial and more intricate. Imagine 100 Jeopardy champions crammed into one brain, spouting out facts. That’s impressive!

A Jeopardy game, however, is not the real world. The mastery of information and facts that it values ignores the nuances of life and the complexity of human relationships. Nowhere are the categories of interpersonal or intrapersonal intelligence found on a Jeopardy board, and ChatGPT shares that limitation.

Artificial intelligences using multiple intelligences (AI-MI) to solve problems will continue and, indeed, intensify. But I’m dubious about the ability of AI to read and understand people’s emotions; I’m doubtful that AI will be able to feel and generate empathy. I cannot imagine AI effectively serving as the personal intelligences.

At the New City School, we believed that each of the intelligences is important because each taps into different human capacities. We infused MI throughout our curriculum and pedagogy to give students more pathways to learn. But we also believed that the interpersonal and intrapersonal intelligences are the most important. Today I capture this by saying:

“Who you are is more important than what you know.”

That is not to denigrate scholastics; kids need to learn to read, write, and calculate. It is to say that the personal intelligences – the ability to work with others and to know yourself – are the strongest contributors to success and happiness. I saw this in the progress of our students.

When we think about how our students must be prepared to succeed in the future – recognizing that we must prepare them for success in life, not just to do well in school – the importance of the personal intelligences, SEL in today’s nomenclature, is very clear. ChatGPT and its successors will dominate the routine. Our students still need to learn the 3 R’s, but we cannot stop there. We must teach them how to solve complex problems, including working with, learning from, and appreciating others.

What’s your take on AI-MI? Does its increasing pervasiveness worry you? I would love to hear from you.

Tom (a real person)

Republished with permission from the newsletter: Intelligences Connections Volume 32, Number 5 (click here to subscribe).

Rise and Fall of IQ

By Howard Gardner

A recent article (link here) by John Anderer asks “Are we growing more dumber?” [sic]. Many readers will have seen similar headlines. The fact that scores can drop on some dimensions but not on others shows that IQ is not a single monolith. The finding should not surprise anyone who is sympathetic to “MI perspectives.” After all, there is no reason to think that when one measure of intelligence goes up—or goes down—the others will necessarily move in the same direction as well.

As this article points out, around the world IQ has been rising steadily over recent decades, especially on spatial measures—presumably because our lives are enmeshed in various kinds of visual and technological entities, most of which call on spatial capacities. As we live in a world that is increasingly enmeshed in “artificial intelligence,” devices, and algorithms, we can expect similar shifts in profiles of intelligence.

What will algorithms like ChatGPT do to our personal intelligences? For now, this remains a topic for speculation—if not science fiction—but for how long is difficult to assess. We may decide to attribute personal knowledge to algorithms; we may decide to deny them that form of knowledge; or the algorithms may make their own decision!

AI and Diplomacy: The implications for MI theory

by Howard Gardner and Shinri Furuzawa

The advent of increasingly competent—one could easily say “increasingly intelligent”—computer algorithms raises this question: Which roles and occupations that have long been the prerogative—one could even say, the “exclusive prerogative”—of human beings could be handled as well as, or perhaps better than, by AI? ChatGPT is the current angst-inspiring algorithm, though it will certainly not be the only authoring program available; beyond question, it threatens the future of many educational pathways and many careers as we have come to know them.

In the previous blog “AI, Personal Intelligences, and Diplomacy,” Shinri Furuzawa specified the intelligences that are presumably entailed in the practice of diplomacy. When it comes to linguistic and/or logical-mathematical intelligence, ChatGPT (and kindred programs) are increasingly similar to, and often better than, human beings. In contrast, several intelligences—spatial, musical, bodily-kinesthetic, naturalist—appear unnecessary for tackling diplomatic challenges. That leaves for consideration the intelligences concerned with personhood: interpersonal intelligence (understanding of others) and intrapersonal intelligence (understanding of self).

What of “emotional intelligence”?

Dr. Martin Luther King, Jr. Civil rights march, Washington DC, 1963

Those with a casual interest in these matters will immediately ask, “What about emotional intelligence?” It’s fine to use that term if you prefer, but Howard distinguishes his concepts from those of Daniel Goleman and his associates. In a word, that’s because “emotional intelligence” conflates an understanding of the world of persons with knowing how best to use that skill for benevolent purposes. By that understanding, Martin Luther King or Florence Nightingale might appear no different from a scam artist. Howard prefers not to connect computation with a specific value system. Emotional intelligence can be used to ingratiate or to manipulate.

Interpersonal intelligence

Back to the realm of the personal intelligences—these refer to abilities without assuming or presuming how those intelligences will be used. Without question, interpersonal intelligence (understanding of other persons and how to deal effectively with them) is crucial in diplomacy—and, indeed, in any interaction with other persons. This is a skill which begins early in life and can clearly be enhanced through practice and training. Most neurotypical individuals have little trouble in picking up cues about the emotions and perhaps even the motivations of those with whom they are in regular contact. In contrast, individuals who are on the autistic spectrum are defined as having difficulty in this form of understanding. That does not mean, however, that they are incapable of picking up such cues—they just need to do it in other ways.

How to train interpersonal intelligence

In his book Life, Animated, journalist Ron Suskind describes how his son Owen, who has ASD, learned social connection through the medium of Disney movies. Owen had memorized and could reenact entire scenes from these movies, using them to interpret emotions, behavior, and moral lessons that could be transferred to human interaction. Owen was able to train his personal intelligences by seeing appropriate emotional responses modeled in the movies. The movie dialogues provided the words and phrases to express those emotions, which may also have helped him develop linguistic intelligence. This may provide a model for AI to develop personal intelligences.

At the other end of the spectrum, there are rare individuals who can pick up and remember the most minute details in the faces, bodily posture, and tone of voice of other persons. Skilled theater actors might be one example, skilled politicians another. In the case of actors, they not only observe acutely but can also mimic or impersonate. By careful study of such talented individuals, we may learn about the personal intelligences—how these intelligences are used and developed. This knowledge and understanding in turn can be drawn on by computer scientists or algorithm developers—for positive or negative purposes.

May 2002: Vladimir Putin presents George W. Bush with a letter from Catherine the Great to George III in which she denies his request to send Cossacks to aid British forces in the American Revolution (Source: US Dept. of State Archive)

In short, the better we understand how human beings handle cues from others, especially in face-to-face interactions, the more likely it is that we can program algorithms to do the same. If, for example, Vladimir Putin could understand how to gain insights into the best ways to negotiate with political leaders across the political spectrum, a contemporary Russian version of ChatGPT could be trained to gain the same insights.

At the same time, if each encounter produces an update, a computational diplomat could do what we call learning from experience. As an example, George W. Bush has surely updated his initial evaluation of Putin since the time when he commented,

"I looked the man in the eye. I found him to be very straightforward and trustworthy. We had a very good dialogue. I was able to get a sense of his soul.”

Intrapersonal intelligence

The issue of intrapersonal intelligence proves far more vexed. In modern Western society, we generally value an individual’s insights into his/her personality. And it is just possible that different computer algorithms can also come to emulate different reflective capacities.

Psychotherapists are trained (and their profession was initiated) to help individual patients understand themselves better, thereby enhancing their intrapersonal intelligence. Indeed, such self-understanding has been a major goal of most forms of psychotherapy—to increase self-knowledge. Howard can give a personal example. When, at a time of difficulty, he saw a psychoanalyst periodically over a few months, the analyst suggested that he consider a full-scale psychoanalysis. Howard asked, “Will I be any happier?” The therapist replied, “Not necessarily, but you’ll understand yourself better.” A terse definition of intrapersonal intelligence.

Only a Western ideal?

Consider that knowledge of oneself may be a Western ideal, one that began in classical times and was rejuvenated in the modern era, which one can date anytime from 1550 onward.

Evidence, admittedly controversial, comes from the writings of psychologist Julian Jaynes. Jaynes dates interest in and insights into one’s own personhood to the Greek era. In fact, he dates the origins to the works attributed to the oral bard, Homer. In the Iliad, characters are inevitably types—warriors, heroes, villains, protectors—one gets no sense of Achilles or Agamemnon as distinct personalities. In intriguing contrast, in the Odyssey (presumably inscribed a few centuries later) we get insights into Ulysses as a specific person, a distinct personality. And as we consider individuals from the classical era—ranging from Socrates to Marcus Aurelius—we get clear senses of their own personalities, and, if we allow ourselves to squint a bit, their understandings of themselves as individuals.

We should not go overboard—at least no more overboard than we have already ventured! Yet, many cultural anthropologists would agree that a focus on the self qua self does not characterize many traditional societies. Even today, Japan is much less of a psychological, and much more of a sociological, society than most other modern nations. And as our recent study of colleges underscores (link here), American college students are much more concerned with “I” than with “we.” Lest one dismiss the students as still developing, their parents, alumni, and trustees show even more of a concern with “I” than with “we.”

Intrapersonal intelligence in diplomats

George Kennan (1904-2005) American diplomat and historian

We have wandered quite far from the toolkit of the diplomat. And in fact, for certain diplomats in certain situations, an understanding of self may be an important asset. Though, we would add, the understanding need not—and perhaps should not—be particularly deep. Howard has written previously (link here) about the overly introspective nature of George Kennan, an American diplomat and later historian.

Going out on a limb, we suggest that heightened intrapersonal intelligence is not an important requirement for a diplomat—whether animate or mechanical. Robert Blackwill suggested in his list of ideal qualities for successful diplomats (link here), that they should have an understanding of their own ideology and values, and their level of tolerance for policies which do not align with their own beliefs. Some foreign policy job offers might seem flattering or enticing, but if the offering institution’s ideology is not compatible, then diplomats have to know themselves well enough to be aware that accepting such positions would mean a professional life full of “pain and torment.” Though such self-knowledge would be useful, we doubt that this should be high on a list of essential skills.

Ronald Reagan—US President 1981-9

To use an example from recent history, Ronald Reagan might have been well served if he had known when he was having a bad day or was suffering cognitive decline—or when he should have consulted with his wife, Nancy, or his Chief of Staff, James Baker. However, it was hardly necessary for Reagan to have insights into how his parents affected him, or even what kind of a parent he was to his own five children. As Lou Cannon, his excellent biographer, has expressed it, Reagan’s strength was not in logical-mathematical intelligence—it was in storytelling. We would add that Reagan had a good sense of which stories to tell to which audiences, and that reflected heightened interpersonal intelligence.

In sharp contrast to Ronald Reagan, who we suggest had relatively little insight into himself—he did not know or care about the depth or breadth of his psyche—Barack Obama had considerable insight into himself, as befits a 21st-century intellectual (see his memoir, Dreams from My Father). And yet, while critical of Reagan and admiring of Obama, we would hesitate to rank order their diplomatic skills. In fact, Reagan may have been more successful in negotiating with the Soviet Union than Obama was with China. Going further out on a limb, we wonder to what extent intrapersonal intelligence has been as important in human history as the other forms of intelligence.

Stepping back, we may tentatively conclude that, in addition to linguistic and logical intelligences, a computer-as-diplomat needs to possess, or develop, a powerful sense of the individuals or groups with which it is negotiating. But sense of self—whatever that might mean to AI—can be saved for another day, or another world.

 

References

Cannon, L. (2000). President Reagan: The Role of a Lifetime. PublicAffairs.

Jaynes, J. (2000). The Origin of Consciousness in the Breakdown of the Bicameral Mind. Mariner Books.

Suskind, R. (2016). Life, Animated: A Story of Sidekicks, Heroes, and Autism. Kingswell.

AI, Personal Intelligences, and Diplomacy

By Shinri Furuzawa

New computer algorithms are developing personal intelligences and are capable of outperforming us at games requiring skills once thought to be specific to humans. For decades, computers have surpassed us at games which are primarily logical, syntactic, or mathematical, such as chess, or Go.

Now, however, a recent article in Science describes the Cicero algorithm which can win against humans at the board game, Diplomacy. Enjoyed by the likes of John F. Kennedy and Henry Kissinger, this is a game which requires intuition, persuasion, and deception. Cicero is able to discuss strategy, forge alliances, and carry out subterfuge and betrayal. It mimics natural human language in text conversations that entail negotiation with other players. The ability to observe and evaluate the trustworthiness of other players while convincing others of one’s own trustworthiness, and dealing with imperfect information, are key skills for actual human diplomats.

All things considered, one wonders how close AI could come to replicating the skills of a real diplomat and whether one day, AI could even replace human diplomats.

What Makes a Good Diplomat?

Former high-level American diplomat Robert Blackwill suggested fifteen qualities that he thought essential for diplomats. Perhaps a third of these characteristics are inherent, and therefore irrelevant to AI, such as resilience to failure or honesty. In other areas, such as analytical skills, attention to detail, or knowledge of history, AI already surpasses humans.

AI would, however, struggle in any area involving in-person interactions, where the personal intelligences are especially vital. Diplomats must collaborate, accurately observe and evaluate others, and understand other people’s motivations while taking into account cultural, political, organizational, and other differences. These trained professionals form mental models of their antagonists and update them, even unconsciously.

Diplomats are also skillful in interpreting non-verbal cues such as facial expressions, eye movement, and body posture. For decades, it has been common for diplomats to receive specific instruction on these interpersonal skills. While AI has made advances in interpreting non-verbal cues and information, it’s not quite there.

  • Facial and emotional recognition: AI is already being used to recognize faces and monitor people’s facial expressions, for example, in airport security systems. The problem for affect-detection algorithms, however, is that facial expressions of emotion are not universal; the way in which people communicate their emotions can vary according to culture or the situation. AI also performs better at recognizing Caucasians than people of color, a further problem that may lead to racial profiling.

    If AI can’t yet read us well by looking at our faces, it does better at listening to our voices.

  • Voice analysis: AI already has voice recognition and realistic voice generation. It can now also be used to detect patterns and characteristics in the voice that cannot be picked up by the human ear. Algorithms can predict psychiatric illness and other health conditions. By analyzing recordings of Vladimir Putin’s voice in February and March of 2022 during the ongoing war in Ukraine and comparing them to a recording of a talk he gave in September 2020, AI was able to detect stress levels 40% above baseline. While AI can collect such data, it must still be interpreted by humans and cannot—or at least should not—be used to predict human behavior. 

    AI capabilities may still be nascent in some areas, but they will only improve in the future.

Could AI Ever Replace Human Diplomats?

Diplomacy may involve skills that we have long considered to be quintessentially human. I talked to Steven Siqueira, a former Canadian diplomat and chief of staff for several UN peace operations, and to Dr. Martin Waehlisch, who leads the Innovation Cell in the Policy and Mediation Division of the UN Department of Political and Peacebuilding Affairs. I asked them what a diplomat does that an AI could never do. It seems to me that it comes down to interpersonal intelligence.

Steven Siqueira - former Canadian diplomat and chief of staff for several UN peace operations

Martin Waehlisch - leads the Innovation Cell in the Policy and Mediation Division of the UN Department of Political and Peacebuilding Affairs

  • Developing personal relationships: In Siqueira’s view, “You need personal relationships to get things done.”

    He gave the example of when he was tasked with establishing a UN mission in Sudan. Siqueira negotiated with a large number of separate stakeholders, which meant cultivating a myriad of different relationships. AI may be able to form analyses and identify requirements, but actual implementation is a human task. It would be extremely hard for AI to navigate the interface between personalities and the intricacies behind each stakeholder’s position: their limitations and accountability, whether to politicians, the military, civil society, or the media, all while working together toward a mutually satisfactory outcome.

Political scientist Joseph Nye would agree on the value of human relationships. Nye describes the importance of “soft power” as opposed to traditional “hard power,” which relies on military or economic strength. He suggests that agreements and alliances today are fostered more through amicable relations, using tact and warmth, rather than aggressive tactics. According to Nye, even a smile can be a soft power resource. Diplomatic efforts need to be directed at citizens, not just governments, shifting toward influence through likeability, attraction, and relationship rather than—or at least in addition to—power, force, or coercion. As Waehlisch says, “The future is about soft skills… I was skeptical of emotional intelligence but I’m more and more convinced.”

  • Innovative thinking: AI’s ability to think creatively and adapt to circumstances is also questionable. AI cannot respond in innovative ways if it is only drawing from the past. In the Diplomacy game, the chatbot is not creating anything new; it regurgitates moves based on success rates in past games.

    In the real world, diplomats think on their feet and rely on their training and experience to deal with new situations. This aligns with the last point on Blackwill’s list; diplomats must be quick to recognize opportune moments and know how to exploit fortuitous and unforeseen circumstances when they arise.

  • Experience: In diplomacy, experience is crucial. Diplomats are trained through mentoring and vital skills are learned on the job. Blackwill listed learning from experience as an essential skill for diplomats, and as he puts it, “Would you hire a plumber who was academically well-versed in water distribution, but had never installed a pipe?”

What Role Does AI Have to Play in Diplomacy?

AI may fall short in personal intelligences, but it fares significantly better in linguistic and logical-mathematical intelligences. Siqueira and Waehlisch provided some insights into how AI is being used in diplomacy now, and how it could be used in the future.

  • Generating text: Blackwill’s list of essential skills includes the ability to write and speak well, or linguistic intelligence. The latest reports on OpenAI’s ChatGPT attest to AI’s ability to converse convincingly with a human. It can engage in philosophical discussions, tell (bad) jokes, and debate political issues; it can also write and debug code, write college-level essays, and take tests successfully. Whether the task entails making an after-dinner speech or giving a presentation, AI can be programmed to tailor language, tone, style, and format to match an audience. Many of the more mundane report-writing tasks performed by interns today could be carried out by AI. Diplomats will no doubt increasingly rely on AI for research.

  • Mediation: AI could be used to support mediation. At the UN, for example, all mediated agreements are in a database. AI could easily draw upon the same language to mitigate similar situations that have occurred in the past. AI could scan and track different clauses—thereby providing valuable insights and perhaps helping to sustain peace efforts.

  • Advisory roles: Computers are able to process and instantly retrieve exponentially more information than humans, enabling them to take over traditional advisory roles. Diplomats on the UN Security Council use their smartphones to find information or receive instructions rather than relying on advisors to whisper in their ears. Computer programs and algorithms are superior at assessing data and anticipating outcomes—important skills in negotiation.

  • Targeting resources: At the UN, AI capacities in the form of data aggregators are already being used to analyze the press releases and communiques of all foreign ministries, allowing political officers to “mine the sentiment” on a given topic. Knowing which countries are most concerned about an issue enables targeted approaches—for example, by knowing which countries may be open to providing donor resources.

    Geospatial technology has recently made significant advances in providing “eyes in the sky.” These capacities entail data collection and analysis in fragile states which can improve monitoring and allow targeted humanitarian or peacekeeping efforts. It’s important to remember, however, that early warning doesn’t mean early action–political decision-making must still be done by people. Technology can’t fill this gap.

  • Increased productivity: AI undoubtedly improves productivity. Internally, it tackles systemic challenges with tools aimed at automation and speed; externally, it enables closer human connections, which fosters inclusivity.

  • Access: AI also enables dialogue. Many groups that once could not have been part of the negotiation process due to geographical remoteness, or that were simply not allowed at the table, can now be party to the conversation. Increased language and translation capabilities, through TV and radio mining, enable access to low-resource languages. Such outreach outflanks cultural and language barriers.

  • Training intrapersonal intelligence: New advances in virtual reality (VR) can be used to develop a diplomat’s intrapersonal intelligence. Such technology allows active “body swapping,” so people can “walk each other’s journeys.” Built-in behavioral science experiments may well detect implicit biases and identify cognitive challenges. VR provides a safe space for diplomats to learn about themselves, discover their biases, and better understand their interactions with others. Put differently, it fosters perspective-taking and helps overcome dehumanization. As Waehlisch suggested, “What if Netanyahu went through an Israeli checkpoint as a Palestinian?” In VR, he would see how people looked at him, the weapons pointed at him, and feel the danger, to perhaps reveal a new perspective.

Dangers of AI in Diplomacy

There are some things that can never be left to AI.

  • Decision making: Delegating decision-making to AI would be a mistake, even though in some ways it could be seen as desirable.

    It is conceivable that AI could be programmed to make more rational, fair, and evidence-based decisions than humans. After all, AI is not vulnerable to human emotions or weaknesses. For centuries, the ideal diplomat was like a robot, coldly efficient, rational, and devoid of emotion, as codified in diplomatic protocols. Indeed, diplomats are routinely rotated every few years to prevent emotional attachments. In contrast, AI has no problem remaining detached and calm in stressful situations. Without emotions or physical sensations, AI could not be threatened or made to feel vulnerable in the same way as a human, for example, as when Vladimir Putin used his dog to intimidate Angela Merkel—famously terrified of dogs. AI would not be motivated by personal gain, or be tempted to abuse its authority, and would be untroubled by the personal cost of resisting political pressure and standing by diplomatic policy decisions (in a nation’s interest) even if unpopular. AI would not be susceptible to exhaustion or lapses in judgment. In fact, in a survey conducted by the Center for the Governance of Change at IE University in Spain, one in four Europeans indicated that they would prefer policy decisions to be made by AI rather than by politicians. However, in decision-making, complete rationality is not always best.

Take the “Prisoner’s Dilemma” from game theory as an example. Even though mutual cooperation would yield a greater net reward, the only outcome for two purely rational prisoners is mutual betrayal. In real life, of course, this stance could quickly lead to escalated military action, or even nuclear war and mutual destruction. Even if we set parameters beforehand, these may be incomplete or fail. Would AI have the ability to pull back? If an algorithm were tasked with bringing about world peace, an “efficient” move might be to eradicate all humans from the planet.
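For readers unfamiliar with the dilemma, its logic can be sketched in a few lines of Python. The payoff values below are the standard illustrative ones from game theory textbooks, not figures from this post:

```python
# Classic Prisoner's Dilemma payoffs (years in prison; lower is better).
# Each entry maps (A's choice, B's choice) -> (A's sentence, B's sentence).
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: light sentences
    ("cooperate", "defect"):    (3, 0),   # A silent, B betrays: A punished hardest
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),   # mutual betrayal: worse than mutual silence
}

def best_response(opponent_choice):
    """A purely 'rational' prisoner minimizes their own sentence,
    whatever the other prisoner is assumed to do."""
    return min(["cooperate", "defect"],
               key=lambda my_choice: PAYOFFS[(my_choice, opponent_choice)][0])

# Defection is the dominant strategy against either opponent choice...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (2, 2) leaves both worse off than mutual cooperation (1, 1).
```

The point the post makes is visible in the last three lines: local rationality drives each agent to defect, even though both would prefer the cooperative outcome.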

Yejin Choi, a computer scientist and 2022 recipient of the MacArthur “Genius grant,” makes the same point from an ethical standpoint. In one interview, she said that in the most fundamental ways, “AI struggles with basic common sense.”

While humans understand many things implicitly, such as common exceptions to rules, AI must be specifically taught them, or it risks choosing extreme or damaging solutions that humans would never consider. Choi argues the challenge will be to account for value pluralism: to teach AI that values can be broad and that diverse viewpoints need to be taken into account. Ethical guidelines are necessary, but there is no one moral framework that can be imposed. The implications for diplomacy are dangerous. While AI will continue to improve, Choi doubts that humans will ever create sentient artificial intelligence, or AI with true intrapersonal intelligence.

  • Malevolence: There is potential for AI technology to be used maliciously, and we need to work on ways to forecast and mitigate such threats. The threat is easy to see in what has been described as today’s “post-truth era”: AI is being used for negative messaging, which leads to greater polarization, the destabilization of existing frameworks, and the influencing of elections.

  • Bias: There is also the problem of bias in AI systems. While often seen as a technical problem, most AI bias stems from human biases and systemic, institutional biases. For machine learning models to work well, a very large, diverse, and robust dataset spanning all ages, genders, ethnicities, and other demographic criteria must be used. Throughout the history of Western diplomacy, key decisions have been made mostly by men of a certain profile, which could certainly skew the dataset. Regulations and safeguards are of course necessary. Excessive concentration of AI capability in a handful of technology companies must also be avoided through regulation—for example, by encouraging competition and preventing monopolization.

“The Greatest Threat and the Greatest Opportunity”

French ambassador David Cvach said in a 2018 TEDx talk that AI is both the greatest threat and the greatest opportunity for diplomacy. There is truth to this assertion.


In the field of international relations and diplomacy, AI is touted more often as a threat, for example, in terms of autonomous weapons. Though AI may have (often unintentional) negative consequences, organizations such as AI For Peace take a different stance: in their view, dialogue among academia, industry, and civil society can help ensure the benefits of AI while minimizing the risks. Waehlisch has suggested that machine learning and natural language processing can be used to promote peace. His chief concern is how to use new technologies to help de-escalate violence and increase international stability.

I would argue that while AI will augment the work of human diplomats, making them more efficient and effective, it will never be more than a useful tool in diplomacy. Indeed, it could not and should not replace human diplomats. AI might outperform humans at most analytical tasks, but humans will still surpass AI at more subtle “feeling tasks.” Even as algorithms come closer to replicating human interpersonal intelligence, direct person-to-person interaction is probably the most important method of increasing or maintaining “soft power” in diplomacy. Chatbots may be able to fool humans at the Diplomacy game online, but robots such as Sophia (appointed in 2017 as the UN Development Program’s first Innovation Champion) could not yet be mistaken for human.

On the positive side, the opportunities of AI lie in creating a more level playing field, as long as technology is not limited to wealthy countries. The ability of diverse stakeholders to use algorithms could provide more holistic and comprehensive solutions to today’s challenges, such as forced migration or unanticipated pandemics. Perhaps AI can be a means for engaging and uniting people around the world on issues of mutual interest for a more peaceful and sustainable future. We should use all our multiple intelligences, and the possibilities of artificial intelligence, to achieve this end.

In our next blog post, Howard Gardner will discuss the implications of AI in understanding human personal intelligences.

I would like to thank Howard Gardner for his valuable input into this post. I am also grateful to Steven Siqueira and Martin Waehlisch for very kindly agreeing to interviews and sharing their thoughts.

Musical Intelligences: Human and Artificial

Recently I happened to read two articles about music back to back—and they generated strands of thought which may inform one another.

It’s long been realized that individuals familiar with a musical idiom have some capacity to anticipate what’s going to happen next. If it’s a surprise, it may be a pleasant one, a harsh one, a boring one, and—though rarely—one that is totally discombobulating. We speak about the amount of information in the signal and how much new information is provided by the next tone or sequence. To use examples familiar to musical aficionados, this phenomenon may explain the delight heard at the first performance of George Gershwin’s Rhapsody in Blue, the anger expressed at Igor Stravinsky’s The Rite of Spring, and the range of reactions to the final movement of Beethoven’s Ninth (Choral) Symphony.

In the first article I read, I learned about research conducted at Aarhus University in Denmark. The puzzle: can we anticipate that a musical phrase is ending before it actually does, or must we await the appearance of the unexpected in order to be surprised? How do we know that a phrase has ended and something new is about to begin? Or, to shift metaphors, do we have to see the first scene of the second act to know whether it’s going to be “business as usual”?

The researchers invoke the concept of entropy. High entropy tones are unexpected; low entropy tones are expected. Their study confirms that in listening to music, the mind (and, of course, the brain) is one step ahead of the musical signal—it anticipates the next signal and notes whether or not it meets expectations. 
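The entropy in question is Shannon entropy, a measure of how uncertain the listener’s prediction of the next tone is. A minimal sketch of the idea follows; the tone distributions here are invented for illustration and are not drawn from the Aarhus study:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical predictive distributions over four candidate next tones.
predictable_context = [0.85, 0.05, 0.05, 0.05]  # one tone strongly expected
ambiguous_context   = [0.25, 0.25, 0.25, 0.25]  # any tone equally likely

print(shannon_entropy(predictable_context))  # ~0.85 bits: "low entropy"
print(shannon_entropy(ambiguous_context))    # 2.0 bits: "high entropy"
```

In these terms, a “high entropy” moment is one where the listener’s model offers no strong prediction, which is exactly where the study locates perceived phrase boundaries.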

As investigator Niels Chr. Hansen describes it, “We clearly see that people have a tendency to expect high-entropy tones (ones that are relatively unexpected) as musical phrase endings.” In other words, we experience melodies that end in an unexpected way as more complete than those that end in a low-entropy, expected way. Participants lingered longer on high-entropy tones, as if they were expecting something new to emerge after the perceived end of the phrase.

In terms of “MI theory,” one could say that individuals who are especially good at such anticipation are displaying musical intelligence. And we could further probe that capacity by exposing subjects to new styles of music and seeing how rapidly they can anticipate when a “high entropy” break in the expected pattern is about to occur.

The second article is far grander, even grandiose. Over the last two years, musical experts have accomplished a feat that eluded the greatest of classical composers—they have completed Beethoven’s 10th Symphony—and in fact, a full recording of Beethoven #10 was released on October 9, 2021. If you thought Beethoven had completed only 9 symphonies, you would have been literally correct. Before his death in 1827, Beethoven left only some musical sketches for the commissioned 10th symphony. But as the article reports, “Now, thanks to the work of a team of music historians, musicologists, computers and computer scientists, Beethoven’s vision will come to life.”

In their words: “In June 2019, the group gathered for a two-day workshop at Harvard’s music library. In a large room with a piano, blackboard and a stack of Beethoven sketchbooks, spanning most of his works. We talked about how fragments could be turned into a complete piece of music, and how AI could help solve this puzzle, while still remaining faithful to Beethoven’s processes and vision.”

How did AI solve this puzzle? The “AI” system needed to “learn” from Beethoven’s entire body of work how he might have approached and completed this final symphony. As described by the leader of the AI team: “We would need to use notes and completed compositions from Beethoven’s entire body of work—along with the available sketches from the 10th symphony—to create something that Beethoven might have written.”

After many efforts, they conducted a test—appropriately, in Bonn, Beethoven’s hometown. The team printed some AI-developed scores and played them for an audience on a piano. The audience was challenged to determine where Beethoven’s phrases ended and where the AI extrapolation began. The audience failed the test… or, as we might quip, “AI passed the test!” This process was repeated several times. Over 18 months, the team constructed two entire movements, each lasting longer than 20 minutes. Now you can listen to this human-AI team’s work and judge for yourself.

Note that these articles both highlight the capacity to synthesize. In the first instance, listeners put together—synthesize—what they have heard to this point and make their best guess about what is to come. In the second instance, humans and AI programs each review the earlier Beethoven musical corpus and put together—synthesize—a possible new work in the Beethovenian style.

As this pair of articles illustrates, our understanding of musical intelligence is being enhanced thanks to research by psychologists, neuroscientists, AI experts, and—of course—musicians and musicologists. Perhaps, thanks to these scholarly undertakings, the musical intelligence of all human beings can be enhanced. We then have to hope that, as with the case of all intelligences, they are mobilized for positive ends… as did happen with Beethoven’s Tenth Symphony.

 

REFERENCES

The Brain Is a Prediction Machine, and Music Reveals How It Works. (2021). Neuroscience News. Retrieved 13 October 2021, from https://neurosciencenews.com/brain-prediction-music-19364/

Elgammal, A. (2021). How AI helped to complete Beethoven's 10th Symphony. The Straits Times. Retrieved 13 October 2021, from https://www.straitstimes.com/opinion/how-ai-helped-to-complete-beethovens-10th-symphony

Hansen, N., Kragness, H., Vuust, P., Trainor, L., & Pearce, M. (2021). Predictive Uncertainty Underlies Auditory Boundary Perception. Psychological Science, 32(9), 1416-1425.
