There is a lot of talk these days about bias. We are told that the media have a left-wing (MSNBC, Washington Post, New York Times) or a right-wing (Fox News, Wall Street Journal editorial page) bias; that employers should watch out for unconscious or implicit bias in hiring and employee evaluations; and that colleges and universities should (or shouldn’t) create a bias report system in order “to document and respond to bias-related incidents experienced by community members.”

This is not to say that concern about bias is of recent vintage. We have long thought that our criminal and civil courts are places in which justice is best served if trials are conducted by unbiased judges and verdicts delivered by unbiased juries. It is an ancient legal norm that can be found in the Old Testament, the New Testament, the Qur’an, the Magna Carta, Thomas Aquinas’s Summa Theologiae, and Blackstone’s Commentaries.

Yet not all bias is pejorative or contrary to the demands of justice. Consider the fictional case of Ms. Guided, a parent patronizing the public pool, who saves the drowning Tony (a stranger’s child) rather than the drowning Tina (her own child), because, as she explains, “When I saw that I could only save Tony or Tina but not both, and I realized that Tony’s future prospects were better than Tina’s, I chose to suppress my implicit bias, act impartially, and save Tony.” Ms. Guided’s deliberation, though seemingly rational and impartial, would strike most of us as callous indifference to the fate of a child to whom she has a special obligation. On the other hand, if Ms. Guided were Ms. Lifeguard, a public employee hired to protect all the pool’s patrons with impartial consideration, we would most certainly see her choice as tragic but not born of callous indifference.

As should be evident from these examples, our ideas about bias are more complicated than we may initially think. To help us navigate the philosophical puzzles that often arise in serious discussions of bias, Princeton philosopher Thomas Kelly has written a new book, Bias: A Philosophical Study. Although published by an academic imprint, Bias is remarkably accessible to educated non-philosophers. To be sure, like most philosophical writing—even when its prose is clear and its arguments straightforward, as in this case—it has to be read slowly and with great care.

Fundamentals of Bias

After its opening introduction, the book is divided into three main parts, each comprising several chapters: (I) Conceptual Fundamentals, (II) Bias and Norms, and (III) Bias and Knowledge. In the first chapter, “Diversity, Relativity, Etc.,” Kelly gives us the conceptual lay of the land, explaining the different ways in which the term “bias” can be used. One of Kelly’s most interesting observations is that bias is a relative matter: in some contexts an individual counts as biased (e.g., Frieda, while serving as a judge, is convinced before trial that the defendant is guilty), while in other contexts the same individual does not count as biased (e.g., Frieda, while serving as a witness, is convinced that the defendant is guilty because she witnessed him commit the crime). Kelly also explores the directionality of bias (What are you biased for or against?), bias about bias (Are you biased in how you perceive bias?), biased representation (Are you biased in selecting true facts that nevertheless produce a misleading narrative?), and parts and wholes (Can a whole be biased while its parts are unbiased, or vice versa?).

In the succeeding chapter, one of several issues Kelly addresses is the relationship between unbiased outcomes and biased processes. Can unbiased outcomes come from biased processes? Presumably, yes. Suppose I am a presidential pollster who interviews only one thousand people from a small city in central California (biased process), but it turns out that the city, unbeknownst to me, happens to be perfectly representative of the population of the United States (unbiased outcome). Can biased outcomes come from unbiased processes? Presumably, yes. Suppose I ask ChatGPT to provide a randomly generated description of my philosophical work (unbiased process), but what results is highly misleading because it exaggerates the importance and influence of my scholarship (biased outcome).

Bias as Departure from a Norm

But the book’s most important chapter is chapter 3 (which begins Part II). It is where Kelly presents what he calls the norm-theoretic account of bias: “bias involves a systematic departure from a norm or standard of correctness.” The best way to understand this account is by example. A criminal court’s standard of correctness is to convict a guilty defendant and to acquit an innocent one. That is its aim. It may, on occasion, fail to achieve that end because of mistakes, misjudgments, faulty evidence, etc. But such failures can occur without bias if they are not systematic. On the other hand, if departure from the norm is the result of a faulty process, or of the judge’s belief that defendants from a particular racial group are probably guilty, then we have bias, since such a departure is systematic.

Calling this account “illuminating and fruitful,” Kelly cashes out several of its implications. Among them: (1) it helps us understand disagreements over alleged instances of bias, including whether they are systematic, based on a real norm, or real violations of a real norm; (2) it shows that bias attribution has a perspectival character; and (3) it allows us to say that not all instances of bias are immoral or irrational.

Points 2 and 3 are particularly interesting. Concerning 2, consider the case of a devout religious student who believes his religion is true but refrains from claiming that believers in other religions who hold beliefs contrary to his are wrong. Kelly writes: “[The student’s] overall stance is a rationally unstable one: so long as he continues to believe the doctrines of his own religion are true, he’s bound to think that those who hold incompatible views are mistaken.” Bias works the same way, reasons Kelly. If you and I hold contrary political views, even though we may be equally informed and equally rational and have access to the exact same evidence and arguments, each of us is bound to think that the other one is biased. Thus, from my perspective you’re biased, and from your perspective I’m biased.

Although for some readers this may seem like an obvious point that should go without saying, I suspect that our current political climate would be vastly improved with a dose of the epistemic modesty suggested by Kelly’s observation. In an era dominated by both loss of trust in expertise among ordinary citizens and condescending appeals to “science” and “social justice” by experts in lockstep with the pieties of the present age, there is something to be said for cultivating in ourselves a better appreciation of our cognitive limitations.

Concerning point 3, imagine an NBA referee who is told by the head of ISIS that unless he fixes game 7 of the NBA finals, his family of five will be murdered. Although the referee, if he carries out the order, is biased in the pejorative sense, morality and reason require that he fix the game. On the other hand, if Ms. Guided were biased in favor of Tina (her daughter), we might not even count it as pejorative bias, since it is consistent with, and not a departure from, a widely accepted norm of parental affection.

Part III is devoted to epistemological issues concerning bias and makes helpful observations about a variety of topics, for example: bias and knowledge; knowledge, skepticism, and reliability; and bias attribution and the epistemology of disagreement.

Bias and the University

But because I cannot engage with all of Kelly’s insights, I will single out just one that may help us in one of our most heated public debates: bias in higher education. Relying on both social science research and philosophical argument, Kelly concludes that we all have a bias blind spot. When it comes to other people, we readily apply our theories of bias to their particular circumstances in order to detect their biases. When it comes to ourselves, on the other hand, we set those theories aside and rely on introspection to detect our own biases. The result is a bias blind spot. To grasp what Kelly means, let me share a story from personal experience. Years ago, when I was on the faculty of the University of Nevada, Las Vegas (UNLV), a friend and colleague in communication studies told me that she was perplexed about how I could fairly cover the issue of abortion in my Contemporary Moral Issues class, since I was committed to the pro-life position. Because she was pro-choice, I asked her, “Wouldn’t you be just as perplexed about how a pro-choice professor could fairly cover the same issue?” Seemingly dumbfounded by my query, she simply replied, “But that’s different,” and turned around and went back to her office. Apparently, it had never occurred to her to attribute to advocates of her own view the bias she reflexively attributed to those who oppose it.

“The net effect of this asymmetry,” writes Kelly, “is that we end up thinking that we’re less biased than other people.” In chapter 1, when discussing parts and wholes, Kelly points out that “in principle, a whole might be unbiased even if its constituent parts are biased to a high degree.” (Think, for example, of our adversarial judicial system: though virtually all the parts are biased, we are more likely to get an unbiased verdict precisely because it results from an adversarial process conducted under agreed-upon rules.) If we combine these two ideas (that we have a bias blind spot, and that unbiased wholes might consist of biased parts), we have the basis of an argument for greater intellectual diversity in America’s public universities, which have over the past three decades become increasingly biased and increasingly progressive.

Thanks to the progressive uniformity on most campuses, students rarely encounter in the classroom the best versions of conservative, classical liberal, and moderate ideas on contested public questions concerning race, sex, religion, speech, and identity. The political orthodoxy has calcified: in their official statements and policies, universities have installed progressive beliefs as unassailable dogmas that one may publicly question only at one’s peril.

But if we accept a norm-theoretic account of bias (“bias involves a systematic departure from a norm or standard of correctness”) and take seriously Kelly’s ideas about bias blind spots and unbiased wholes with biased parts, then we should seek public universities whose constituent parts are biased to a high degree. Such universities would intentionally hire faculty members and administrators who hold contrary views on divisive cultural issues. Doing so would likely produce campuses whose teaching and scholarship exhibit far less pejorative bias than those of their peer institutions.

Bias: A Philosophical Study is one of those rare books that are rich in philosophical insight while also having practical application. There is, of course, much more in this book than I could possibly cover in this review. What I chose to cover probably reflects my biases, which, upon introspection, are not as bad as you may think.