AI in the Academy

Cautious embrace of a new technology

Illustration by Pete Ryan, playing on the grade A+ and the term AI

Generative artificial intelligence, which can write prose and computer code, generate audio, and create images, has within the past year become capable of producing work indistinguishable from that of humans. Given the right prompts, generative AI can perform undergraduate-level coursework, complete graduate-level physics problems, and pass both the medical and bar exams, dean of science Christopher Stubbs explained during a late-summer Zoom session with Harvard instructors in science, technology, engineering, and mathematics (STEM) fields, and that capability is having an abrupt impact across higher education.

In the best case, AI can be used to enhance teaching and student mastery of a subject, Stubbs said. But because those tools can solve virtually any textbook problem, instructors will need to teach their students to be “adept, responsible and ethical users of this technology,” and incentivize them to “invest in the extended intellectual effort” necessary to “gain mastery of the material.” No less profound is the potential impact in humanities courses that teach writing skills, or that rely on independently produced written work to assess student achievement. Even the admissions process, which requires an application essay, may have to change.

A preview of what such changes might look like in computer science rolled out this summer in CS50, Harvard’s large introductory course. McKay professor of computer science David Malan has prototyped an AI that functions as an intelligent subject-matter expert, able to help students at any time of day or night by answering questions and providing feedback far beyond normal teaching hours. The AI is designed to give students useful prompts and reactions to their work, but without writing code outright. Ultimately, says Malan, the aim is to create an intelligent aid so adept that the result is “the approximation of a one-to-one teacher-to-student ratio.”

Tools like ChatGPT (a type of generative AI known as a large language model, or LLM) work by estimating the statistical probability of the next word in a phrase, based on analysis of existing material drawn from publicly available online sources, with a little randomness thrown in. That’s an oversimplification, but regardless, Malan says, ChatGPT and other AI tools are already too good at writing code (and essays) to be useful for teaching beginning computer science students: they could simply hand them the answers. The AI that he and his team have built has “pedagogical guardrails” in place, so that it helps students learn how to write their own code.
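
To make that mechanism concrete, here is a deliberately toy sketch in Python of next-word prediction: it counts which words follow which in a tiny corpus, then samples the next word in proportion to those counts. (Real LLMs use neural networks trained over subword tokens, not simple frequency tables; every name and the corpus here are illustrative.)

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "existing material drawn from publicly
# available online sources."
corpus = "the cat sat on the mat and the cat ran to the mat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to observed frequency."""
    counts = follows[word]
    if not counts:  # dead end: no observed successor, so restart
        return "the"
    words = list(counts)
    # random.choices supplies the "little randomness" the article
    # mentions: likelier words win more often, but not always.
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a short continuation, one word at a time.
text = ["the"]
for _ in range(6):
    text.append(next_word(text[-1]))
print(" ".join(text))
```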

The CS50 team plans to endow the AI with at least seven distinct capabilities, some of which have already been implemented. The AI can explain highlighted lines of code in plain English, much as ChatGPT might, telling students line by line exactly what the code is doing. It can also advise students on how to improve their code, explain arcane error messages (which are written for advanced programmers), and help students find bugs in their code via rhetorical questions (“You might want to take a look at lines 11 and 12”). Eventually, CS50’s AI will be able to assess the design of student programs, provide feedback, and administer oral exams, which the human course staff can then evaluate by reviewing transcripts of the interaction.
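
CS50 has not published the internals of its tool, but guardrails of the kind Malan describes are commonly imposed by wrapping a commercial model in a fixed system prompt that permits explanation and hints while forbidding finished code. A minimal hypothetical sketch, assuming the OpenAI Python SDK; the function name, model choice, and prompt wording are all invented for illustration and may differ entirely from the course’s actual approach:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical "pedagogical guardrails": the model may explain code and
# errors and point at likely bug locations, but never write the answer.
GUARDRAILS = (
    "You are a teaching assistant for an introductory programming course. "
    "Explain concepts and error messages in plain English. Point students "
    "toward bugs with rhetorical questions ('You might want to take a look "
    "at lines 11 and 12'). Never write or complete code for the student, "
    "even when asked directly."
)

def tutor_reply(question: str, student_code: str) -> str:
    """Return guarded feedback on a student's question about their code."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAILS},
            {"role": "user", "content": f"{question}\n\n{student_code}"},
        ],
    )
    return response.choices[0].message.content
```

A prompt alone is a weak guardrail; a production system would presumably also filter outputs and log transcripts for course staff, as the oral-exam plan above suggests.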

The technology is paired with new language outlining course expectations. Students have been told that the use of ChatGPT and other AIs is not allowed—but that using CS50’s own AI-based software is reasonable. “That’s the balance…we’re trying to strike,” Malan says. The software presents “amazingly impactful positive opportunities” but not “out of the box right now. So, we’re trying to get the best of both worlds.”

Figuring out how generative AI can elevate education across the disciplines was the subject of the annual Harvard Initiative for Learning and Teaching (HILT) conference on September 22. Provost Alan Garber noted that on a trip to East Asia about six months earlier, he had met the CEO of a gaming company, who told him that his engineers were all using ChatGPT, that coding took them about one-sixth as much time with it, and that they made fewer mistakes. “My guess is, in certain fields, this will be integrated into our educational programs,” he said. Looking to the future, he added, “People are going to care less and less about how good a software engineer you are without assistance, and much more about how well you can use these tools to produce great code.”

Just how generative AI will affect education broadly was the subject of debate. For students, will it become just another tool, akin to a calculator, permitted in many circumstances but not when learning how to perform multiplication and long division? How will it affect teachers? Kristina Ishmael, deputy director of the U.S. Department of Education’s Office of Education Technology, shared the assessment of Secretary of Education Miguel Cardona that teachers who use AI will eventually replace those who do not (a bold statement to which other experts took exception later in the day).

Pescosolido professor of Romance languages and literatures and of comparative literature Jeffrey Schnapp, who has long studied how technology can enhance pedagogy and creativity in the arts and humanities, noted that the privileging of written communication skills over oral performance, a balancing act that universities have been carrying on for centuries, may change in response to generative AI. “The role of oral, live, public speaking assignments is going to increase,” he predicted. “I don’t see that as a crisis. It is only a crisis with respect to the model that we inherited from the last half century of academic practice.” In his classes, Schnapp currently uses AI to generate multiple versions of texts and visual art, forcing students to evaluate the prompts they’ve given the AI (how closely does the output match what they intended?) and engage in critical discourse about the results.

A breakout session during the HILT conference enabled students to share their experiences with the new technology. Many reported using AI tools to enhance their understanding of difficult concepts. An applied-math undergraduate said that AI can help students fill gaps in their knowledge; another uses the technology to build outlines for papers; and a student studying epidemiology reported using it to analyze data sets and uncover new patterns of correlation. Doctoral student in musicology Siriana Lundgren explained how ChatGPT generated a 250-word paragraph about female composer Clara Schumann that highlighted how Schumann had achieved success despite her frail, womanly fingers, a bias “encoded into the AI because it had pulled from years and years of historians writing about Schumann’s frail, womanly fingers.” So Lundgren asked her students to track down where this historical bias originated, using the sources they had already read. Students also expressed concern that they weren’t being taught quickly enough how to use the tools, which could prove critical to their future job prospects, a remarkable sentiment for such a new technology.

Harvard has not set a University policy governing student or faculty use of AI in courses, leaving this to faculty members’ discretion, but senior administrators took steps this summer to address the privacy and data security risks associated with the technology. Because third-party AI platforms own both user queries and the resulting outputs and can incorporate them into the AI’s publicly available training data, they pose threats to copyrighted and confidential information. As a result, the University has negotiated the purchase of “walled off” versions of the tools, which are being tested in a pilot program, with broad rollout to the community anticipated later this fall. Three University-wide committees overseen by the provost’s office have begun studying AI for use in administrative processes, for research, and for teaching and learning (and the technology might even impact admissions, perhaps including a larger role for alumni interviewers). “Our strength,” said Garber, “is that we have expertise distributed throughout the community.” While marshaling that expertise takes time, he added, it results in better decisions. “Our challenge to the University is to harness the creativity and dedication of our community, and to join with other academic communities in figuring out how to work with these tools.”

by Jonathan Shaw
