The Sanctity of the Classroom, Continued
There have been further developments since an early-November Faculty of Arts and Sciences (FAS) discussion of previously undisclosed photographic monitoring of classes to study students’ attendance patterns. This report covers:
- a message sent by Carswell professor of East Asian languages and civilizations Peter K. Bol, who is vice provost for advances in learning, to students in the classes that were photographed and analyzed [Note: Bol is a member of the Harvard Magazine Inc. board of directors.];
- The Harvard Crimson’s report that attendance data had in fact been photographically recorded from 29 classes in total—not 10, as originally disclosed; and
- a description of the research undertaken using the photographs, and of preliminary findings.
During the November 4 FAS meeting, Gordon McKay professor of computer science Harry R. Lewis, a former dean of Harvard College (and just appointed interim dean of the School of Engineering and Applied Sciences), rose from the floor to ask about photographs taken during courses to study attendance: research conducted under the auspices of the Harvard Initiative for Learning and Teaching (HILT), but without benefit of prior notice to the professors or students involved. The disclosure was news to a community that had expressed concerns about electronic encroachments on privacy (a field about which Lewis has written, and a sensitive area, given the 2012-2013 investigations of resident deans’ e-mail accounts, and the subsequent issuance of uniform University policies on such matters). The photographic research also seemed at odds with the usual protections accorded FAS classrooms. Lewis asked that the students and faculty “who were the subjects of this nonconsensual study” be informed that they were “under photographic surveillance.”
In response, Bol, who as vice provost oversees HILT, explained that he wished to learn more about how students spent their time and what their teachers expected of them, but found little real data on whether attendance at classes was slipping, as rumored. He explained how the photographic study was organized, how the protocol was reviewed, and the processing of the data collected: essentially, automated counting of whether seats in a lecture hall were filled or empty.
The Vice Provost’s Message
On November 12, Bol e-mailed the following message to students in the affected classes:
Under the auspices of my office, a research study on attendance patterns took place in the spring of 2014.
The methods of the study involved photographs of lecture halls and a computer algorithm to differentiate filled from empty seats with no identification of any individual student, and subsequent destruction of the underlying images. The researchers involved in this study do not know who was enrolled in the courses that were photographed.
I am writing to inform you – utilizing a blind email list – that you were enrolled in one of the courses for which attendance patterns were analyzed. Note: Images for the study were captured only of the physical classroom seats being photographed.
I have given a link to my statements to the FAS faculty in response to a question posed by Professor Harry Lewis to describe the purpose of the study and the reasoning behind the methods.
Ultimately, we hope that this and other research to understand student behavior will help to improve teaching and learning.
If you have any specific questions or concerns about the study, I would be happy to speak with you….
Other Classrooms, Other Voices
As the Crimson reported on November 14, more courses were in fact recorded, beyond the 10 that had been widely discussed. A message to the Crimson (subsequently made available to Harvard Magazine) from associate provost for institutional research Erin Driver-Linn, the director of HILT, and Samuel Moulton, director of educational research and assessment (who first alluded to the study findings during HILT’s annual conference on September 16), provided details:
As detailed earlier, the study involved classes that took place in four lecture halls in the spring semester. The preliminary findings presented at the 2014 HILT Conference and in the information provided yesterday included data derived from 10 high-enrollment courses that took place in those particular lecture halls and involving up to 2,000 students. Overall, there were 29 courses (22 College/GSAS and 7 DCE [Graduate School of Arts and Sciences and Division of Continuing Education]) that met regularly in these lecture halls during the spring 2014 semester.
In early September, after months of developing the algorithm, the researchers determined that it was in fact possible to accurately extract attendance estimates. Vice Provost Bol then scheduled meetings with course faculty to discuss whether or not they wanted to have their course data fully analyzed and included in a report of de-identified results. After faculty agreed, Sam Moulton reached out to those faculty to ask clarifying questions, interpret any unusual attendance estimates, and thus refine the analysis. By the time of the HILT conference on September 16th, 10 course instructors had provided their consent and the analysis of their data had been refined sufficiently such that they could be reported. Since that time, Vice Provost Bol has met with most (but not all) other course instructors, and Sam has been able to then follow-up. Analysis for the courses beyond the initial 10 presented at the conference (and picked up by the press) has not been completed and may not ever be completed.
The underlying imagery data (from which attendance estimates were drawn) have been destroyed for all courses. The derived data for the 12 College/GSAS and 7 DCE courses not included in the research presented at the HILT conference have not yet been fully analyzed. Only courses whose faculty agree to participate and for which data is clearly interpretable will ever be reported. In the spirit of full disclosure, however, all students enrolled in any of the 29 courses were informed that their photograph was taken.
Whether or not additional course data are analyzed and reported, this research was never meant to bring scrutiny to individual courses, faculty, or students, nor was it ever meant to judge individual courses or faculty. We believe that doing so will bring no benefit to student learning or faculty teaching. The goal has consistently been to understand lecture attendance in order to be able to ultimately improve student engagement and learning. We ask again that you do not identify faculty unless they specifically have agreed to be identified.
The Attendance Study
HILT has now posted “Lecture attendance research: Methods and preliminary findings,” dated November 2014.
It details the use of GoPro cameras in the lecture halls (the Crimson identified them as Science Center facilities); seven months of systems development to create the algorithm to distinguish empty from full seats, for use in analyzing the images captured (complete with a reconciliation adjustment for “the week of Daylight Savings change”); and pairing of image data with information on lectures from course websites and syllabi.
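The report does not publish the algorithm itself, but the basic approach it describes — automatically deciding whether each seat in a lecture-hall image is filled or empty — can be sketched roughly as follows. This is an illustrative guess at one simple way to do it (comparing fixed seat regions against a baseline image of the empty hall); the function, thresholds, and seat coordinates are all hypothetical, not the researchers' code.

```python
import numpy as np

def seat_occupancy(frame, baseline, seat_boxes, threshold=25.0):
    """Classify each seat region as filled or empty by comparing a
    grayscale lecture-hall frame against a baseline image of the
    empty hall. A seat whose mean absolute pixel difference exceeds
    the threshold is counted as occupied. (Illustrative only; the
    HILT study's actual algorithm was not published.)"""
    occupied = 0
    for (top, left, bottom, right) in seat_boxes:
        region = frame[top:bottom, left:right].astype(float)
        empty = baseline[top:bottom, left:right].astype(float)
        if np.abs(region - empty).mean() > threshold:
            occupied += 1
    return occupied, len(seat_boxes)

# Synthetic demo: a 100x100 "hall" image with two seat regions.
baseline = np.zeros((100, 100))
frame = baseline.copy()
frame[10:30, 10:30] = 200          # someone sitting in the first seat
seats = [(10, 10, 30, 30), (10, 60, 30, 80)]
filled, total = seat_occupancy(frame, baseline, seats)
print(f"{filled} of {total} seats occupied")   # prints "1 of 2 seats occupied"
```

Counting filled seats this way, rather than detecting faces, is consistent with the researchers' statement that no individual students were identified.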
The graphs for each of the 10 courses analyzed show patterns of attendance varying by day of the week, the incidence of quizzes and exams, optional class meetings (a dud, in terms of attendance, even when a movie was on offer), and progress through the semester (during which attendance generally dwindled).
Summing findings from the 10 classes, the data showed that:
- On average, 60 percent of students attended any given lecture.
- Moreover, average attendance varied considerably among courses: some averaged as low as 38 percent over the semester, while others reached as high as 94 percent.
- Lecture attendance declined over the semester, starting at 79 percent and ending at 43 percent.
- Attendance also eroded within the week: about 5 percent from Monday to Wednesday or Tuesday to Thursday, and 10 percent from Wednesday to Friday.
The two classes that commanded the highest attendance throughout were both premedical requirements. Overall, “[C]ourses that measured and graded attendance had higher attendance than those that did not (87 percent vs. 49 percent, respectively). Second, courses in which students self-reported enrolling to fulfill a premed requirement had much higher attendance than other courses.

“Other reasons for taking the courses (e.g., elective vs. General Education requirement) did not show significant effects, nor did time of day, day of week, published Q ratings [student course evaluations], or the availability of lecture videos.”
Strengthening Pedagogy: Some Perspective on Harvard’s Strategies
Beyond the important questions of privacy, there is of course a need for useful research and experimentation to improve the effectiveness of teaching. And in fact, for more than a decade, faculty members, various schools, and Harvard as a whole have grappled with concerns about the quality of teaching, what students learn and how, and the evolution of technology that may be applied to the classroom.
Some productive efforts have resulted from individual professors’ interest and initiative (see Harvard Magazine’s profile of Eric Mazur’s early work on engagement within the classroom, rather than teaching purely through lecturing). In the sciences, concern over the persistent underrepresentation of women and minority students has led to concerted efforts to enhance teaching and engage students. Cabot professor of biology Richard M. Losick, to cite a prominent example, has championed efforts to redesign introductory courses; develop problem-focused learning backed with multimedia tools; build professor-teacher research partnerships; and generally raise the status of teaching to the level accorded research within university settings.
Technology has both broadened the opportunities and complicated the challenges during the past decade. The University announced a significant investment in massive open online courses (MOOCs) in the spring of 2012—a technological means of disseminating teaching broadly and, presumably, bringing some of those pedagogical lessons back to the classroom. In the same academic year, a $40-million gift launched HILT, holding forth the promise of institutional support on a whole new scale for individual professors’ and schools’ pedagogical-reform efforts. Both HarvardX and HILT have encouraged experimentation with teaching technologies; both are also supporting technologically enabled research on what happens in teaching and learning transactions—albeit at the fairly basic level, to date, of the HILT study on raw attendance at class lectures, and HarvardX reports on how many people register for online courses (sometimes, many tens of thousands) and how many actually complete all the assignments (typically, a small fraction). As the researchers themselves note, their studies are still in their infancy.
In the meantime, of course, the gains available from teaching technologies may be offset by the technologies distracting students in other ways. Are all those students with their laptops open in class taking notes—or scanning Facebook or making reservations for a ski vacation? In an intriguing presentation at the September HILT conference, Malia Mason, Gantcher associate professor of business at Columbia Business School, talked about “The Battle for Mindshare.” She was able to document exactly how much time students spent during the course of a 24-hour day on trolling social media, how much they enjoyed doing so, how severely that cut into their discretionary time for other activities (i.e., studying)—and how often they misperceived how they actually invested their scarce hours. Thus, technology enabled her to study the technologically mediated diversion from learning.
For all this higher-level activity, and the broad interest in technology (for teaching, and learning about teaching and learning effectiveness), there is still much to be done on a simpler level. Lee Shulman, a professor at Michigan State and Stanford and past president of the Carnegie Foundation for the Advancement of Teaching, famously wrote an essay entitled “Teaching as Community Property.” His manifesto called for subjecting pedagogy to structured, documented analysis, peer review, and revision — in other words, for making teaching (the creation of a syllabus, the work done in the classroom) a collective enterprise, not a solitary one, every bit as subject to open observation, criticism, and correction as research is.
As reported, Losick’s department is one of the few within FAS where “peer review” of teaching practice is in place—the sort of activity Shulman advocated. The results, Losick said in 2011, benefit not only the junior professors who are developing their classroom skills, but also tenured professors who are exposed to colleagues’ successes. Deploying such practices broadly and making the most of the University’s potential as a center for educational excellence, he suggested, depends on “inspiring leadership” from department chairs, deans, and the president.
There are pockets of such practices around Harvard. The Business School, for one, inculcates the skills of its case method in all its faculty members, with classroom observation, taping and subsequent evaluation of teaching, and so on—perhaps especially valuable for professors who come to the school from liberal-arts disciplines such as economics or history. Classroom successes and pitfalls are frequently compared in teacher meetings before each session of the multi-unit, required, first-year M.B.A. courses.
That kind of rigorous focus on a single teaching method would not apply readily to a multidisciplinary faculty like FAS, with its dozens of diverse departments and fields. But the commitment to the human enterprise of learning about teaching and learning certainly has to rank right up there with the technological marvels of producing a full-blown HarvardX course—even when the online materials are then recast to “flip” a classroom, with students viewing lectures on their computers and then using class time to work through problems together. The dollar costs are no doubt far lower, and the classroom return on the investment would likely be very high. Engaging faculty members in that demanding work, with its high personal stakes and personal vulnerability, surely is as high a priority as other University investments in teaching and learning.
If the conversations Peter Bol has had with faculty members whose lecture courses were photographed to study attendance now prompt broader interest in voluntary classroom observation, peer review, critique, and analysis, that could be a valuable outcome from the many months of taking pictures, creating an algorithm to analyze them, and graphing the data for HILT’s report.