Language as a Litmus Test for AI

And the problem of bias


Language, which clearly played an important role in human evolution, has long been considered a hallmark of human intelligence, and when Barbara Grosz started working on problems in artificial intelligence (AI) in the 1970s, it was the litmus test for defining machine intelligence. The idea that language could be used as a kind of Occam’s razor for identifying intelligent computers dates to 1950, when Alan Turing, the British scientist who cracked Nazi Germany’s encrypted military communications, suggested that the ability to carry on a conversation in a manner indistinguishable from a human could be used as a proxy for intelligence. Turing raised the idea as a philosophical question, because intelligence is difficult to define, but his proposal was soon memorialized as the Turing test. Whether it is a reasonable test of intelligence is debatable. Regardless, Grosz says that even the most advanced, language-capable AI systems now available—Siri, Alexa, and Google—fail to pass it.

The Higgins professor of natural sciences has witnessed a transformation of her field. For decades, computers lacked the power, speed, and storage capacity to drive neural networks—modeled on the wiring of the human brain—that are able to learn from processing vast quantities of data. Grosz’s early language work therefore involved developing formal models and algorithms to create a computational model of discourse: telling the computer, in effect, how to interpret and create speech and text. Her research has led to the development of frameworks for handling the unpredictable nature of human communication, for modeling one-on-one human-computer interactions, and for advancing the integration of AI systems into human teams.

The current ascendant AI approach—based on neural networks that learn—relies instead on computers’ ability to sample vast quantities of data. In the case of language, for example, a neural network can sample a corpus—extending even to everything ever written that’s been posted online—to learn the “meaning” of words and their relationship to each other. A dictionary created using this approach, explains assistant professor of computer science Alexander “Sasha” Rush, contains mathematical representations of words, rather than language-based definitions. Each word is a vector: a relational definition that locates the word with respect to other words. Thus the vector offset between the words “man” and “woman” would be mathematically analogous to the offset between words such as “king” and “queen.”
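That analogy can be made concrete with a little vector arithmetic. The sketch below uses tiny, hand-made two-dimensional vectors, purely hypothetical stand-ins for the hundreds of learned dimensions in a real embedding model, to show how the vector for “king,” minus “man,” plus “woman,” lands nearest “queen.”

```python
# A minimal sketch of word-vector analogy arithmetic.
# The vectors here are invented, two-dimensional illustrations;
# real embeddings are learned from a large text corpus and have
# hundreds of dimensions.
import numpy as np

embeddings = {
    "man":   np.array([ 1.0,  0.0]),  # axis 0 ~ a hypothetical "gender" dimension
    "woman": np.array([-1.0,  0.0]),
    "king":  np.array([ 1.0,  1.0]),  # axis 1 ~ a hypothetical "royalty" dimension
    "queen": np.array([-1.0,  1.0]),
    "apple": np.array([ 0.0, -1.0]),  # an unrelated distractor word
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Return the word whose vector best completes a : b :: c : ?"""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = (w for w in embeddings if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(analogy("man", "king", "woman"))  # -> "queen"
```

Real systems learn such vectors from word co-occurrence statistics rather than by hand, but the nearest-neighbor arithmetic works the same way.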

This approach to teaching language to computers has tremendous potential for translation services, for developing miniaturized chips that would allow voice control of all sorts of devices, and even for creating AIs that could write a story about a sporting event based purely on data. But because it also absorbs the human biases attached to culturally freighted words like “man” and “woman” (consider what the resulting mathematical representations come to embody about gender, power dynamics, and inequality once they take on the associations of a word such as “CEO”), it can lead neural-network-based AI systems to produce biased results.
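The same arithmetic exposes the bias problem. In the hypothetical sketch below, a “gender direction” is derived from the difference between “man” and “woman,” and an occupation word is projected onto it; if the training text associated “CEO” mostly with men, the learned vector inherits that skew. The numbers are invented for illustration, but projecting words onto such a direction is a standard way researchers probe embeddings for bias.

```python
# A hypothetical illustration of embedding bias: the vectors are
# invented, but the measurement mirrors how learned embeddings
# are probed for gender skew.
import numpy as np

man   = np.array([ 1.0, 0.0])
woman = np.array([-1.0, 0.0])
# Pretend the training corpus mentioned "CEO" mostly in male
# contexts, so its learned vector leans toward the "man" side.
ceo   = np.array([ 0.6, 0.5])

gender_axis = man - woman                    # direction encoding gender
gender_axis /= np.linalg.norm(gender_axis)   # normalize to unit length

skew = float(ceo @ gender_axis)              # projection onto that direction
print(f"gender skew of 'CEO': {skew:+.2f}")  # positive -> "male-leaning"
```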

Rush considers his work—developing language capabilities for microscopic computer chips—to be pure engineering, and his translation work to be functional rather than literary, even though work like his undoubtedly advances the goal of building an AI that can pass the Turing test. But significant obstacles remain.

How can a computer be taught to recognize inflection: the rising tone of words that form a question, or the sharpness of an interruption to discipline kids (“Hey, stop that!”), cues of the sort that humans understand immediately? These are the kinds of theoretical problems Grosz has been grappling with for years. And although she is agnostic about which approach will ultimately succeed in building systems able to participate in everyday human dialogue, probably decades hence, she does allow that the answer might well be a hybrid of neural-network learning and human-developed models and rules for understanding language in all its complexity.
