Your independent source for Harvard news since 1898


Features

A Science Is Born

The “yeasty times” when computer research grew at Harvard

September-October 2020

Illustration by Mark Steele

Sidebars:

A partial list of those mentioned in this article

Thirty veterans of Harvard’s Aiken Computation Lab reunited on January 19, 2020, some 50 years after each of us had a role in creating today’s networked, information-rich, artificially intelligent world. Rip van Winkles who had never fallen asleep, we gathered to make sense of what had evolved from our experience as undergraduates and Ph.D. candidates during the decade 1966-1975. One thing was clear: we hadn’t understood how the work we were doing would change the world.

Harvard didn’t even call what we were doing computer science; our degrees are mostly in applied mathematics, mathematics, or physics. The University remained blind to the future of computing for a long time. I joined the faculty in 1974, right after completing my graduate work. Four years later, as a still-junior faculty member, I tried to get my colleagues in DEAP (the Division of Engineering and Applied Physics, now SEAS, the School of Engineering and Applied Sciences) to create an undergraduate computer-science degree. A senior mechanical engineer of forbidding mien snorted: surely not. Harvard had never offered a degree in automotive science; why would we create one in computer science? I waited until I had tenure before trying again (and succeeding) in 1982.

But there we were, in our teens and twenties in the Aiken lab, laying some of the foundation stones on which the field has been erected.

 

No information infrastructure has been more consequential than the internet—arguably the most important information technology since Gutenberg made movable type practical. And Harvard fingerprints are on the internet’s embryo. As with so many critical advances, the circumstances were somewhat accidental.

Notwithstanding the pioneering work of professor of applied mathematics Howard Aiken in the 1930s on the Mark I, his massive electromechanical calculator, by 1960 Harvard was not a place to study circuitry or computer design. The action in hardware had moved, first to Penn and then to bigger engineering schools and industrial organizations. (For more on Aiken, see the review, “Computing’s Cranky Pioneer,” May-June 1999, page 25.)

So when Ben Barker studied hardware design as a Harvard sophomore, his instructor was a part-time adjunct faculty member named Severo Ornstein. Ornstein was an engineer at the Cambridge firm of Bolt Beranek and Newman (which had been co-founded by Leo Beranek, Ph.D. ’40). BBN won a contract from ARPA (the Advanced Research Projects Agency of the Department of Defense) to design the first Interface Message Processor. The IMP (which Ted Kennedy once hilariously mischaracterized as an Interfaith Message Processor) was the electronic switching device that would glue heterogeneous host computers together to form the ARPAnet. Ornstein became the hardware engineering lead and brought his Harvard students (and College radio station WHRB engineers) Barker and Marty Thrope onto the team (see the photo below).

The Interface Message Processor team at BBN in 1969 (from left to right): Jim Geisman, Dave Walden, and Will Crowther crouch in the center; surrounding them are Truett Thach, Bill Bartel, Frank Heart, Ben Barker, Marty Thrope, Severo Ornstein, and Bob Kahn. (One key member of the team, Bernie Cossell, missed the photograph.)

Photograph courtesy of Frank Heart

The IMP project was interesting work, but no one thought they were changing the world. Ornstein remembers that when the Request for Proposals for designing the IMP and building the first part of the ARPAnet first arrived on the project manager’s desk, “He handed it to me and said, ‘Take this home and read it and let me know what you think.’ I did so, and next morning I put it back on his desk, saying, ‘Looks like a straightforward engineering job; we could certainly do it, but I can’t imagine why anyone would want such a thing.’” High up the chain of command there was a vision—in 1963, while he was head of ARPA’s Information Processing Techniques Office, former Harvard research fellow J.C.R. Licklider had grandly touted the idea of an “Intergalactic Computer Network.” But the most obvious actual utility of the first IMPs was to enable a printer attached to one computer to print a document from another.

While working for BBN during his Harvard graduate studies, Barker installed the first IMP at UCLA in September 1969. Thrope, employed full time at BBN after finishing his undergraduate degree, installed the second IMP a month later at SRI (originally the Stanford Research Institute, in Menlo Park). On October 6 Barker sent the first message to Thrope across the network, which at that point consisted of nothing but those two IMPs. From two nodes the internet has grown to tens of billions of computers.

Barker does not remember what his message said. The fact that it arrived was miracle enough.

 

“Those were yeasty times,” one of our group said. Big things grew quickly out of next to nothing and shape-shifted in reaction to their environment. John McQuillan ’70, Ph.D. ’74, wrote an important dissertation laying out the way the ARPAnet could, without any central control, figure out which parts of itself were broken and route data so neither sender nor recipient would notice the failures in between. Some at ARPA viewed the internet’s capacity to survive failures as a central feature, because that promised to harden the network against nuclear attack. Others looked at the network and had other ideas.

Bob Metcalfe started graduate school at Harvard in 1969 after earning undergraduate degrees in engineering and business at MIT. When Harvard got its ARPAnet node in 1971, Metcalfe wanted to manage it. Harvard rebuffed him: that was a job for a professional, not a grad student. So Metcalfe talked his way into managing MIT’s node instead, and thereafter was seen only rarely around Harvard. Then one day in 1972 shocked whispers raced through Aiken: Metcalfe had failed his Ph.D. defense. Nobody ever fails their Ph.D. defense; it’s a symbolic and celebratory occasion, with champagne chilling outside the examination room. But somehow Metcalfe had fallen so far out of touch with his faculty committee that they walked into that room with different expectations. Metcalfe had already accepted a job at Xerox’s Palo Alto Research Center (PARC), where he moved while changing dissertation advisers. Almost simultaneously with successfully defending his revised thesis, he and a co-author at PARC published the design of the Ethernet, the networking protocol that provided connectivity to computers scattered around a building at a fraction of the cost of the IMPs that connected large systems located hundreds of miles apart. Metcalfe left Xerox in 1979 and founded 3Com to commercialize Ethernet, which became ubiquitous. (Now a professor in both the engineering and business schools at the University of Texas, he is a writer, mentor, and visionary on technology.)

 

Our reunion group members are mostly third-generation computer scientists. Aiken was Harvard’s first generation; Claude Shannon was MIT’s. These men were already legends—we never saw them during our years at Harvard. (First-generation pioneer Grace Hopper did give a memorable talk at Harvard in the early 1970s, fuming because the cabin crew on her flight had mistaken her Navy admiral’s uniform for that of a retired stewardess.)

The second generation of computer scientists included Anthony Oettinger, who was Aiken’s student, and Ivan Sutherland, who was Shannon’s. Sutherland spent three remarkable years on the Harvard faculty from 1965 to 1968 and, among other more important things, advised my undergraduate thesis.

The language of generations makes the succession sound too tidy. In the 1970s Oettinger shifted his interests toward matters of national security. Except for Sutherland, the only tenured computer scientist during our years was Tom Cheatham, who had no doctorate. He learned about computing in the army, having made the most of his assignment to keep the books at an officers’ club. Our small group inherited some intellectual traditions but also a sense that there wasn’t that much to know about the field, so anyone could contribute to it. Some walked in the footsteps of their advisers, while others came into computing from left field and brought some of that sod with them.

Cheatham had the most students, in part because he had the most money to support them, but also because his tastes were catholic and his turf was unfenced. He advised 36 Ph.D. theses, including that of Ben Wegbreit, who joined Cheatham on the faculty. Cheatham’s group worked on programming languages and the systems that made them usable. It was a hot research area because software development projects were becoming enormously expensive and might nonetheless fail spectacularly. In that environment there were established rows to hoe, if you were the row-hoeing type, but almost anything that might help could be a good thesis topic.

Cheatham set the tone for the Harvard style: bring in good people and give them a lot of responsibility and a lot of freedom—a method that one of our group reported using successfully later as a hiring strategy. Cheatham’s students had a profound influence on language design. In 1977-78, when the Department of Defense launched a competition for the design of the DoD standard language Ada, three of the four competing designs were headed by Harvard students of our era: Ben Brosgol, Ph.D. ’73; Jay Spitzen ’70, Ph.D. ’74, J.D. ’88; and John Goodenough ’61, Ph.D. ’70 (who also was on the faculty for a time).

Cheatham’s students built tools: new languages, compilers, verifiers, anything to improve the ease and quality of programming. That orientation, combined with the very primitive computing facilities available to us, pushed the non-theorists among us toward tool-building and proofs-of-concept. None of us had the ambition to build payroll systems or avionics software; we just wanted to make software that would help other people make systems like that. And nobody launched a company straight out of school—“start-up” was something you did to your car. So most of our group who did not go into academia went to companies that made software tools. It took several years before some realized that they could make a good living by building products for people who were not themselves computer geeks. Rob Shostak, having established a strong theoretical reputation during his years at SRI, launched the Paradox database system in 1985—for a while very widely used on personal computers in offices, including Harvard’s.

 

Bill Bossert, Arnold professor of science emeritus and former master of Lowell House, is not a computer scientist and supervised none of our Ph.D. theses. He is a mathematical biologist who uses computers. But in the words of Pat Selinger, who took his course as a junior at Radcliffe, he was “heart and soul committed to inspiring people to use and appreciate the capability of computers.”

In 1968 Bossert had the idea to teach a computer course for everyone as part of Harvard’s General Education program. It was called Nat Sci 110. The point of the course was to teach students what computers could do, not to make them skilled or employable programmers. He hired one of our group, Mark Tuttle, as head teaching fellow (TF)—not because Tuttle knew much about computer programming (he didn’t), but because he had been an undergrad at Dartmouth, which was already evangelizing computing for everyone. In its first year Nat Sci 110 drew 350 students—more than twice what had been anticipated, and too many for the lecture hall. Bossert responded not by capping enrollment, but by repeating the lecture each class day, giving it once at the scheduled hour of 11 a.m. and then again at the overflow hour of 1 p.m. Some students wanted to come to both lectures—which Bossert said was OK as long as they laughed at his jokes both times.

 

Over tea at his house, President Pusey asked, “Why do we need computers? Why are they so expensive? Why is the faculty complaining?” And so on.

Tuttle recalls being invited, as TF in this phenomenal new course, to tea at President Pusey’s house. “I’m introduced to the president,” he remembers, “who started peppering me with questions—‘Why do we need computers? Why are they so expensive? Why is the faculty complaining?’ and so on. He listened intently but got quite emotional and ignored the others in the receiving line. I answered as best I could, but I am not sure how much got through. Later I learned that Oettinger and others were putting pressure on him.”

The course was a success, even though it was taught in FORTRAN, a poor instructional language, and used rented time on a commercial timesharing system. The following year Tim Standish joined the faculty, and Bossert leapt at the opportunity to use the highly flexible language Standish had designed, called PPL, for Nat Sci 110. Getting the language up and running over the summer fell largely to undergraduate Ed Taft ’73, who went on to a long career at Adobe Systems.

Nat Sci 110 changed lives, Selinger’s for one. Bored in her introductory logic course by the eminent but mumbling Pierce professor of philosophy Willard V.O. Quine, Selinger looked for a course that met in closer proximity to her 10 a.m. physics lecture so she would not always be arriving late, relegated to the back row. Thus she stumbled into Bossert’s passion for making computing interesting and fun. A few years later she finished her Ph.D. on programming languages and systems under the direction of Chuck Prenner, Ph.D. ’72, a student of Cheatham’s who had moved on from being his TF to assistant professor. Then Bill Joyner, another member of our group who had gone to work at IBM Research, aggressively recruited her. At IBM Selinger made fundamental contributions to database query optimization—the technology that makes it possible to find needles in haystacks without going through every stalk. In 1994 she was awarded IBM’s highest scientific honor, IBM Fellow. All because Nat Sci 110 was taught in a lecture hall near the physics building. Geography is destiny.

A few years later Prenner took the course over, and then I inherited it from him. I gave a shopping-season lecture in a Santa suit and drew 500 students my second year on the faculty. But in the late 1970s the Gen Ed program was disbanded and Nat Sci 110 with it; computing wasn’t a “Core Curriculum” subject. Instead, Harvard instituted a joyless and trivializing programming requirement. Students hated “the computer test,” which had the opposite effect from what Nat Sci 110 had done—and what Lady Lovelace had done more than a century earlier: show people that computers could be useful for lots of things.

Yet ghosts of Nat Sci 110 live on. Eric Roberts ’73, Ph.D. ’80, was a Nat Sci 110 TF under Bossert, Prenner, and me while he was an undergrad and grad student. He took his amazing pedagogical skills to Wellesley and then to Stanford, where he shaped the undergraduate computer-science program. Mark Tuttle took his experience to Berkeley, where starting in 1973 he taught a course on “The Art and Science of Computing” to audiences of hundreds. Now Bossert’s pedagogical grandchildren are teaching the fun of computing everywhere. Even at Harvard: Henry Leitner, who was my Nat Sci 110 TF when he was in graduate school, delights students every year with his Computer Science 1, and the dramatic flourishes in David Malan’s hugely popular Computer Science 50 can also be traced back to Nat Sci 110.

 

When some have suggested calling our field “computer sciences,” I have protested that the totality of what is known amounts to at most one science. In the 1960s the field was too small to have well-defined subdisciplines, though speciation was starting to occur. For example, computational linguistics, which has brought us Alexa and Siri, was evolving from three roots. Linguists were trying to use mathematical methods to make sense of human language. Designers of programming languages needed an engineering toolkit with which to build interpreters and compilers, so that the higher-level codes programmers wrote could be executed on real machines. And logicians had for decades studied the limits of computability, and what sorts of decisions could and could not be made by automata. The specific research problem of automated translation of Russian texts financed Cold War attempts to integrate these directions and develop new ones (though the fear of Soviet scientific supremacy ended before much success had been achieved in language translation).

These roots were all sprouting at Harvard. Oettinger worked on Russian translation. His student Susumu Kuno wrote his dissertation on automatic syntax analysis of English and became a professor in Harvard’s linguistics department. Kuno’s student Bill Woods wrote his dissertation on semantics and question-answering, and then developed his work at BBN into a system that was used during the Apollo space program to answer questions about moon rocks. In the meantime, Sheila Greibach, a Radcliffe summa, wrote her Ph.D. dissertation under Oettinger on automata and formal language theory. It was not only an important contribution to the design of parsers and compilers for computer programming languages, but a founding document of theoretical computer science—one of the first success stories of the science of computing.

With both Woods and Greibach teaching at Harvard and with BBN nearby, computational linguistics and theoretical computer science began to emerge as identifiable disciplines. Bonnie Lynn Webber was Woods’s Ph.D. student and followed him to BBN, where she continued working on semantics of natural language while remaining part-time in the graduate program. Harvard finally pressured her to finish or drop out. She chose to finish and began an extraordinary academic career, first at Penn and then at Edinburgh. My recently retired colleague Barbara Grosz, Higgins research professor of natural sciences, a leader in computational discourse analysis, has been a collaborator of Webber and of Webber’s eminent student Martha Pollack, who is now president of Cornell University.

Another of Woods’s students, Lyn Bates, Ph.D. ’75, arrived as a graduate student at Harvard knowing no one, and happened to find Woods’s door open while she was wandering the Aiken hallways. She finished her dissertation on syntax and then joined her adviser at BBN, initially sharing an office with Webber. Her research interests broadened over time; she was involved in early projects on speech understanding, use of natural language for database query, and an award-winning language synthesis project for use by the deaf.

So the research that eventually gave us talking appliances was aborning under our noses 50 years ago, but the linguists in our group emphasize that the problem of language understanding and synthesis is not nearly solved yet. Woods himself spent most of his career in industry and says he is still trying to figure out how to get computers to think.

Henry Leitner was another Woods Ph.D. student; he is now acting dean of Harvard’s Division of Continuing Education at the same time as he teaches computer science. Bill Joyner was a Woods student, too. Now retired from his long career with IBM, he provided to the reunion group his complete graduate-adviser ancestral chart, 100 percent Harvard, back not just to Aiken but another five generations before him: Aiken’s adviser was Emory Chaffee, Ph.D. 1911; Chaffee’s was George W. Pierce, Ph.D. 1900; Pierce’s was John Trowbridge, S.D. 1873; Trowbridge’s was Joseph Lovering, A.M. 1836; and Lovering’s was Andrew Peabody, A.M. 1829.

 

Ivan Sutherland’s group was full of people doing things that had never been done before. For my undergraduate thesis I wrote a processor for ordinary algebraic notation: if the user wrote an equation using superscripts and division bars, the computer would interpret it as an instruction to transform one geometric figure into another. The PDP-1 computer was a disused hand-me-down from an Air Force lab, but it had unheard-of affordances: you could sit down at its console, flip its switches, and type on its keyboard and get it to type back, with no ritual passing of IBM card decks to data processing officiants as was customary with big machines of the day. A room-sized “minicomputer,” the PDP-1 had a tablet with a stylus for writing the equation, a display for showing the equation as interpreted, and two more screens for showing the shape before and after its transformation. The code that recognized handwritten characters was written by Ken Ledeen ’67; we would today say that it did machine learning, extracting features and learning by reinforcement to classify its inputs. Ken was an English major, so instead of proclaiming that it was 92 percent sure that what you had written was a “G,” his program reported in mock-Shakespearean diction “Would that it were ‘G’”—and then invited you to correct it if your scrawl was meant to be a “C” instead.

The masterwork of Sutherland’s Harvard tenure was the first virtual-reality system. It consisted of a head-mounted display attached to a helmet that was in turn connected to the ceiling by telescoping tubes used to detect the position and orientation of the viewer’s head. The PDP-1 was programmed to display a 3-D object that seemed to hang motionless in the air while the viewer’s head moved around and through it. Wearing it was a magical experience, even though the computer was too slow to display anything more complicated than the 12 edges of a wireframe cube.

Danny Cohen’s great achievement—a flight simulator on which you could try to land a schematic airplane—spawned hugely profitable businesses.

While still an undergraduate, Bob Sproull ’68 helped design a critical part of Sutherland’s head-mounted display system. He went to Stanford for graduate school, and when BBN shipped an IMP to the university in 1970, it arrived with a note from Ben Barker to Sproull scrawled on the shipping crate. In 1973 Sproull co-authored an early and highly influential computer-graphics textbook; while at PARC he was part of the team that designed the first networked personal computer system. As part of a distinguished career in academia and industry, he was for 20 years in a leadership role at Sun Microsystems Laboratories.

Sutherland’s student Danny Cohen, Ph.D. ’69, joined the faculty and kept Sutherland’s graphics program alive at Harvard for a few more years. Cohen’s great achievement was a flight simulator. Using switches and a joystick, you could try to land a schematic airplane on a schematic landing strip. It was a fun game that spawned hugely profitable businesses building realistic flight simulators. Commercial airlines and the military, it turned out, would pay a lot of money to have their pilots crash-land a simulator rather than an actual airplane. Cohen had been a paratrooper in the Israeli military and said he built the simulator so he could learn to fly. As it was, Barker insists that he was the first one ever to land a simulated airplane safely.

Like Barker’s first ARPAnet message, it was amazing any of this worked at all. Computers were slow, expensive, and unreliable. Cohen collaborated with Bob Metcalfe to split the flight simulator’s computing load between Harvard’s PDP-1 and a faster computer at MIT, using the nascent ARPAnet to exchange partially constructed images. Even the basic ARPAnet protocols were not yet in place, so Cohen and Metcalfe had no toolkit to work with: they were pushing bits through a network that was little more than bare metal. Cohen went on to use the same real-time engineering skills for internet voice communications. Internet telephony and video (including Zoom calls) all stemmed from Cohen’s primitive flight simulator running at Harvard and at Metcalfe’s MIT node.

All PDP-1 users remember the “yen board.” We each got a certain number of yen—grad students more than undergrads, and so on. The yen board showed the hours we could sign up to use the machine—a week’s worth of 24-hour days—and we could spread our allotted yen over a segment of time, on the understanding that someone else could outbid us by allocating more yen per hour, and none of us could have more yen outstanding than our quota. Among the high-yen crowd was the visionary J.C.R. Licklider, by 1968 a professor at MIT. In our ignorance we took him for a superannuated graduate student and helped him debug his code. Naturally, those of us at the bottom of the totem pole claimed blocks in the 2 a.m. to 5 a.m. range, when our yen went the furthest, and we emerged best friends with those who had the blocks before and after us.

 

And friends we were, all of us. We supported each other, not because anybody was against us, just because we all got along. We dressed up for dinner parties together, with Julia Child prepping the chefs from her TV screen. We climbed mountains in New Hampshire, not always wisely: Peter Downey, Ph.D. ’74, now professor emeritus of computer science at the University of Arizona, remembers realizing, rather too late, that he might better have worn orange during hunting season and could have turned back sooner in the face of an incipient Mount Washington blizzard.

Joyner had never lived outside Virginia until he came to Harvard, and had trouble making himself understood. It was worse for his wife, Mary Brenda, a fellow Virginian whose job entailed reading numbers over the telephone—mostly to northerners. Joyner claims I discovered a bug in his dissertation, which he was able to fix at the eleventh hour, and also that Rob Shostak and I tried to teach the Greek alphabet to his two-year-old daughter. I know for sure Joyner taught me his mother’s whisky-sour recipe: a can of frozen lemonade, a canful of Rebel Yell bourbon, and ice, mixed in a blender to the consistency of melting snow. Drinking was a great deal more casual then, before alcohol became a controlled substance on college campuses. Wine and cheese parties every Friday afternoon in the basement of Aiken brought together undergrads, graduate students, staff, and faculty.

The group of women in our cohort was large for the time. Judy Townley, Ph.D. ’73, from the University of Texas, worked in Cheatham’s programming-languages group and joined him for a time in a software consultancy. Emily Friedman from Atlanta, via Cornell, became a professor at UCLA before spending most of her career at Hughes Aircraft. Radcliffe grad Brenda Baker also wrote a theoretical dissertation, but took a position at Bell Labs expecting to work on speech synthesis. She soon found “the freedom to do curiosity-driven research” and became a jack-of-all-trades in the design of algorithms for mathematical problems. During a career spanning several decades, she has published on program analysis, computer-aided design, and robotics, among other subjects. Nancy Neigus, A.M. ’70, joined Ornstein, Barker, and Thrope at BBN. Miriam Lucian, Ph.D. ’72, from Romania, was one of only two international members of our group; she went on to a long career as an engineer at Boeing, but her Harvard dissertation research in mathematical logic earned her an adjunct philosophy professorship at the University of Washington. The other immigrant in our group, Peter Chen from Taiwan, was inspired to attend Harvard by the example of industry pioneer An Wang, Ph.D. ’48, inventor of magnetic core memory and founder of computer company Wang Laboratories.

For a time, the women in our group had a faculty mentor and role model in Greibach, who taught many of us. The women generally remember having been treated as equals—by us, though not always by others at Harvard, and certainly not after they left Harvard and moved into academia and industry. Independently, three different individuals in our group reported having been told by a Radcliffe dean that “girls” weren’t good at science or shouldn’t be majoring in math. It was typical of the time—as was Greibach’s departure in 1969 for a permanent position at UCLA when Harvard didn’t offer her tenure. Greibach’s student Ron Book, Ph.D. ’69, took over theory teaching from her until he too left for California, having advised six Ph.D.s as a junior faculty member, including four women: Baker, Friedman, Lucian, and Celia Wrathall, Ph.D. ’76 (who married him).

We were privileged to work with one more underappreciated giant in those days. Ugo Gagliardi never held a ladder appointment on the Harvard faculty. He had his own consulting firm and, like Woods, taught in an adjunct role as professor of the practice of computer science. Educated in Italy at a university older than Harvard and experienced with the early computer company Olivetti, Gagliardi brought a wealth of practical wisdom to the courses he taught on operating systems and related software. His student Jeff Buzen was a founding figure in the statistical modeling and analysis of computer systems; only after Buzen’s work did it become possible to estimate accurately where to spend money to expand an overloaded computer system. Buzen too joined the faculty and several of us learned computer-system design from him and Gagliardi—and from Metcalfe, when he was Gagliardi’s TF and Buzen’s reassigned Ph.D. student. Buzen and two other Gagliardi students, Bob Goldberg, Ph.D. ’73, and Harold Schwenk, Ph.D. ’72, eventually started their own very successful business, BGS Systems. Peter Chen, who studied under Gagliardi, Buzen, and industry veteran George Mealy ’51, passed on the opportunity to join the BGS team and went on to receive broad acclaim as a database scholar for his Entity-Relationship model.

And that is a good place to end the chronology of that period, because the computing world started to change dramatically soon after I joined the faculty. A smart sophomore named Bill Gates took a course from me in 1975. A building named for the mothers of Gates ’77, LL.D. ’07, and his poker buddy Steve Ballmer ’77 would eventually replace Aiken, but at this point Gates was spending most of his time on Harvard’s PDP-10 writing code for a microcomputer he neither owned nor had ever seen. Some on the faculty discouraged him; the intellectual challenges, they said, were in disk scheduling, and here he was toying with code for a machine that didn’t even have a floppy. And so, for a second time the action in computer science moved away from Harvard, this time to the West Coast, where many of our group, students and faculty alike, had already emigrated. I stayed and started to bring order to the undergraduate curriculum, while a series of junior faculty came and went, including one who later won the Turing Award but whom Harvard judged not promising enough for tenure. A serious intellectual resurgence would not take root at Harvard until Barbara Grosz, Michael Rabin, Watson professor of computer science emeritus, and Leslie Valiant, Coolidge professor of computer science and applied mathematics, arrived in the early 1980s.

 

So what made Aiken so generative in those days?

Part of the magic was that it was full of smart students, and the faculty “stood back and let you go,” as one of us said. But we also remember Tom Cheatham’s benevolent generosity. Tom was “an academic magnet” and “a river to his people”—in particular, he funded students’ travel. Those trips, and the regular visits to Harvard by Cheatham’s scientific collaborators, opened students’ ears to intellectual hatching noises coming from the world beyond Harvard. Tuttle remembers a specific incident that exposed a tension with which the field of artificial intelligence is still struggling. Data rules in AI today; systems get smart by generalizing from millions of examples. But for the first generation of researchers, whose computers were not large enough to store or process large data sets, AI was all about symbolic logic and automated reasoning. So when Tuttle was able to attend an AI conference in California, he witnessed “an open battle between the AI speakers—symbol manipulators all—and those in the audience from Silicon Valley who foresaw the role of statistics and probability.… [Marvin] Minsky [’50, JF ’57] and Seymour Papert tried to prevent their field from engaging in empirical—data-driven—approaches to problems. Obviously, their efforts were successful only in the short term.” What an educational experience! (What logic-based systems could do that remains a challenge for data-driven systems is to explain their decisions. It is morally untenable to have judgments about human lives—how long to incarcerate a criminal, for example—emerge from inexplicable numerical parameters magically distilled from mountains of training data.)

Colloquia with outside speakers were important for us. Big names—and names like Alan Kay that later became big—came by and talked to our little group of faculty and students. Bates and Selinger remember Hopper’s dramatic talk, with a fistful of wire segments representing nanoseconds, at a time when female speakers were rare. I remember Edsger Dijkstra, who like Kay would later win the Turing Award, advocating formal thinking with the commitment and the intolerance of a religious zealot. Mark Tuttle heard another future Turing Award winner, Jim Gray, present his principles at a colloquium, but made sense of them only decades later, when a situation required him to put them to use.

Ironically, the growth and professionalization of our discipline and of the Harvard program have made such memorable encounters less frequent today. Theorists go to the weekly theory talk and an almost disjoint set of people go to the weekly talks on computation in society; assemblies of the whole are infrequent and unwieldy. Fifty years ago, smallness and immaturity fostered cross-fertilization and excitement.

Everybody seems to remember deeply meaningful acts of kindness by fellow students or by our faculty mentors. And everyone has such a story about Pauline Mitchell, the DEAP administrator who ran everything having to do with students. Mine is typical. In those pre-cell phone, pre-email days, Mitchell figured out that I was abroad, that I needed money, and that there was a postal strike in Italy. She called her brother (who worked in Harvard’s bursar’s office) and her sister (who worked at the bank in Harvard Square) to get Harvard to wire me the funds. No wonder Harvard felt like family. And that sense of security spawned freedom and creativity.
We talked a lot. Perhaps the southerners loosened the tongues of the northerners, but I remember endless banter and chatter, sometimes idle and sometimes about ideas. “Each of us was working on something very different,” Emily Friedman wrote, “but listening to the progress of the others pushed me on.” We did not hesitate to tell the others what we were doing or to acknowledge what we didn’t understand. We could open up because we were not in competition. Skepticism of intellectual property prevailed, perhaps an inheritance from Howard Aiken: as his student and Turing Award winner Fred Brooks, Ph.D. ’56, put it, Aiken thought “the problem was not to keep people from stealing your ideas, but to make them steal them.” And most of us knew how to write; we were largely alumni of liberal arts colleges, and several had been humanities majors before falling into computing. Those term papers, gabfests, and teaching fellowships yielded success in countless technical pitches we gave over the years.

I am glad to have had tenure and I have defended it as essential to academic freedom. But our experience in Aiken provided little evidence that the institution was either useful or rationally awarded. Our cadre of part-time faculty with one foot in industry and the other at Harvard provided not only superb teaching but exactly the sort of continuity and institutional loyalty that tenure is supposed to promote. Woods, Gagliardi, Buzen, Mealy, Ornstein, McQuillan, Dave Walden from BBN, and others made major contributions, both scientific and educational. And some of our most memorable faculty mentors, not just Cheatham but my own Ph.D. adviser, Pierce professor of philosophy Burt Dreben, had gained permanent positions without a Ph.D. and seemed no less professorial for that deficit. The untenured “professor of the practice” title still exists, but none of today’s incumbents split their time with industry. Faculty hiring has become vastly more competitive, specialized, and systematic—and, to be sure, less inbred. Yet I am not confident that, all things considered, today’s students find their more objectively selected faculty “better” than we found our devoted irregulars.

The last word goes to the Harvard bean counter who was keeping the books on the PDP-10 in 1975 and noticed that a sophomore had used more connect hours in the month of February than there were in the month of February. Bill Gates had, it seemed, invited in some programming assistants. Concerned about how this could be explained to a federal auditor, the administrator summoned Gates for an interview and reported the outcome of his cross-examination in these deathless words: “He did not understand the ramifications of his activities.” Perhaps true of Gates and perhaps not; but the same could be said of most of us who were in Aiken during that momentous decade. And it was true of Harvard too.

 

Corrected 8/26/20: Danny Cohen, Ph.D. ’69, was an Israeli paratrooper, not a pilot. We regret the error.
