In the wake of troubling decisions—cooking the books at Enron, going to war in Iraq on suspect grounds, making mortgage loans to indigent borrowers and passing the risk on to others—scholars in many fields are examining how individuals and organizations conduct themselves relative to ethical standards. In Blind Spots: Why We Fail to Do What’s Right and What to Do about It (Princeton, $24.95), Straus professor of business administration Max H. Bazerman and Ann E. Tenbrunsel, Martin professor of business ethics at Notre Dame’s Mendoza College of Business, seek answers not in philosophy, but through analysis of cognition and behaviors, such as “ethical fading.” This excerpt is from chapter 1.
Could the financial crisis have been solved by giving all individuals involved more ethics training? If the training resembled that which has historically been used and is still in use today, the answer to that question is no. Ethics interventions have failed and will continue to fail because they are predicated on a false assumption: that individuals recognize an ethical dilemma when it is presented to them. Ethics training presumes that emphasizing the moral components of decisions will inspire executives to choose the moral path. But the common assumption this training is based on—that executives make explicit trade-offs between behaving ethically and earning profits for their organizations—is incomplete. This paradigm fails to acknowledge our innate psychological responses when faced with an ethical dilemma.
Findings from the emerging field of behavioral ethics—a field that seeks to understand how people actually behave when confronted with ethical dilemmas—offer insights that can round out our understanding of why we often behave contrary to our best ethical intentions. Our ethical behavior is often inconsistent, at times even hypocritical. Consider that people have the innate ability to maintain a belief while acting contrary to it. Moral hypocrisy occurs when individuals’ evaluations of their own moral transgressions differ substantially from their evaluations of the same transgressions committed by others. In one research study, participants were divided into two groups. In one condition, participants were required to distribute a resource (such as time or energy) between themselves and another person and could make the distribution fairly or unfairly. These “allocators” were then asked to evaluate the ethicality of their actions. In the other condition, participants viewed another person acting in an unfair manner and subsequently evaluated the ethicality of this act. Individuals who made an unfair distribution perceived this transgression to be less objectionable than did those who saw another person commit the same transgression. This widespread double standard—one rule for ourselves, a different one for others—is consistent with the gap that often exists between who we are and who we think we should be.
Traditional approaches to ethics, and the traditional training methods that have accompanied such approaches, lack an understanding of the unintentional yet predictable cognitive patterns that result in unethical behavior. By contrast, our research on bounded ethicality focuses on the psychological processes that lead even good people to engage in ethically questionable behavior that contradicts their own preferred ethics. Bounded ethicality comes into play when individuals make decisions that harm others and when that harm is inconsistent with these decision-makers’ conscious beliefs and preferences. If ethics training is actually to change and improve ethical decision-making, it needs to incorporate behavioral ethics, and specifically the subtle ways in which our ethics are bounded. Such an approach entails an understanding of the different ways our minds can approach ethical dilemmas and the different modes of decision-making that result.