It is commonly believed that public policy ought to be based on scientific evidence. The idea seems so commonsensical one is tempted to ask, “What on earth was [it] based on before?” as philosopher John Worrall put it in a slightly different context. Yet, a moment’s reflection reveals the situation is more complex.

Public policy in a modern representative democracy such as ours is inevitably responsive to a host of pressures: popular opinion, constituent demands, practical constraints, local custom, institutional dynamics, interest group activity, party loyalty, political ideology, and value disagreements of all sorts. For many proponents, science-based policy offers hope that political decision-making might be insulated from such pressures and placed on a firm, objective foundation. The idea is that by basing public policy on scientific evidence, we can minimize our political disagreements and thus arrive at optimal solutions to our shared problems.

According to this view, scientific knowledge is generated by applying reason to empirical observations according to rule-like procedures that leave little, if any, room for human judgment. Scientific conclusions may be fallible, but only because of human bias or incomplete data. When successful, science offers a repository of neutral evidence insulated from the uncertainties, ignorance, and value disputes that beset our politics. Accordingly, by relying on scientific evidence, we can limit, if not eliminate, the role of judgment—and thus of deliberation—in political decision-making. Judgment and deliberation are seen as unnecessary, at best. At worst, they are treated as obstacles to implementing science-based policy.

The political rhetoric surrounding the coronavirus pandemic offers a case in point. Public figures often speak as if the correct policy interventions flow from our scientific knowledge with something resembling deductive certainty, leaving little, if any, room for doubt about the proper course of action. By “following the science,” it is assumed, we can arrive at incontestable policy solutions. Any disagreements about the most effective means to resolve the crisis, or about how to balance the inevitable trade-offs that proposed interventions entail, may then be dismissed as unscientific, even anti-scientific.

Scientific evidence is indeed vital to public policy. The current pandemic has made this undeniable, if it was not already obvious enough. But science does not offer a repository of neutral evidence that arrives ready-made on the political scene. On the contrary, scientific knowledge is an achievement, the result of a complex process in which the judgment of scientific experts plays a decisive role. Using such knowledge to make policy decisions is even more complex, requiring not only expert judgment but also the judgment of those nonexperts whose experience and knowledge are needed to deliberate well about the best course of action.

It follows that judgment and deliberation are not secondary, lesser processes that we must rely on when integrating scientific evidence into the policymaking process. Rather, judgment and deliberation are essential to this process—because they are essential to science itself. Failure to appreciate this risks engendering unrealistic expectations about what scientific knowledge can accomplish in political decision-making, and invites not only disappointment, distrust, and skepticism but also bad policy.

The Role of Expert Judgment in Scientific Reasoning

To see how expert judgment plays a role in science, consider first how a scientist goes about formulating a research agenda. Which research problems are worth pursuing? When should a particular line of inquiry be abandoned as fruitless? A scientist’s choice here may be shaped by many factors, including personal, professional, financial, practical, and ethical considerations.

Is the research valuable, either in its own right or because of its potential applications? Is funding available? Is success likely in the given time frame? Is the experimental design practicable and ethical? The judgment needed to make these decisions will surely be informed by the scientist’s experience, knowledge, and familiarity with the state of the field in addition to practical considerations. Although they do not directly influence scientific reasoning, such judgments are necessary preconditions for research.

Things get more complicated when we consider methodological issues. What mathematical or experimental methods are called for by the problem at hand? Would it be better to perform a randomized controlled trial or an observational study? Can the hypothesis be tested directly against the data, or are computer simulations necessary? What is the best way to weigh the relative risks of false positives and false negatives in test results? Expert judgment is needed in these cases as well. As in formulating a research agenda, such judgment may be guided by considerations such as feasibility and ethics.

This suggests that expert judgment not only plays an indirect role in science, e.g., in formulating research goals; it also plays a direct role in the practice of science itself. This can be seen even more clearly by considering something as routine to scientific work as collecting and interpreting data.

How does a researcher decide if a negative reading indicates a flaw in his instrument or the absence of the phenomenon it is supposed to be measuring? Which data points should be categorized as “signal” and which as “noise”? What statistical techniques are most appropriate for modeling a given data set? Once again, expert judgment is required. Scientific data, then, are not raw observations but rather the results of a complex theoretical procedure in which expert judgment plays a role. And transforming such data into information—which conveys something scientifically meaningful about the phenomenon under study—involves yet more expert judgment.

The same goes for what is perhaps the most central aspect of scientific inquiry: testing hypotheses. During an experiment, which observations should be counted as anomalous—due to human or instrumental error or mere chance—and which as genuine counter-instances to the hypothesis being tested? When faced with a recalcitrant observation, should the researcher abandon a battle-tested hypothesis or reformulate his background theory to accommodate the data? How does he decide between rival interpretations of the same experimental findings?

Typically, these types of decisions are neither paralyzing nor irresolvable. On the contrary, they are the stuff of day-to-day scientific practice (although they can sometimes engender controversy or even spark scientific revolutions). Making such decisions requires the exercise of expert judgment. Such decisions may be aided by formal methods, heuristics, and professional best practices as well as informed by past experience, well-established theory, and empirical observations. Thus, the expert’s judgments are neither arbitrary nor subjective, nor devoid of epistemic content. But they are judgments nonetheless.

Relying on Experts

This role for expert judgment in scientific reasoning has several implications for understanding the relationship between scientific knowledge and practical decision-making.

First, if scientific conclusions depend on expert judgment, then nonexperts who rely on such conclusions must defer to expert judgment, at least regarding the truth of the scientific conclusions in question. The nonexpert may, of course, evaluate the expert’s judgment in a kind of second-order way. She might assess the expert’s trustworthiness, for instance, by considering whether she has any reason to believe the expert could be lying to her. Or she might assess the reliability of the expert’s judgment by considering whether he possesses the relevant background and experience or is well regarded by his colleagues.

In doing this, the nonexpert is employing the same kind of good sense all of us use daily, however implicitly, in assessing our fellows’ reliability. What the nonexpert is not equipped to do is make the expert judgment in place of the expert—or even to evaluate his judgment scientifically. To make or evaluate a judgment about the best statistical model to use, how to interpret anomalous data points, or whether a hypothesis is disconfirmed by observation would require the nonexpert to be an expert, or at least something approaching it.

This is true, to varying degrees, even among scientific experts. Indeed, scientific experts must be able to take for granted conclusions established by their colleagues, on pain of infinite regress. If each scientist had to establish independently every conclusion that she took for granted in her research, she would not only have to be expert in a dizzying array of scientific fields: she would also spend her entire career trying to reinvent the wheel, rather than making any new contributions to her field.

Second, although those who rely on scientific conclusions are thus dependent on expert judgment, at least in some sense, it of course does not follow that expert judgment is infallible. It is—or should be—uncontroversial that scientific hypotheses are open to revision based on further empirical observation. Even if a hypothesis is extremely well established, it could one day be abandoned based on new empirical findings. The history of science offers an impressive array of such examples. But if expert judgment is necessary to establish scientific conclusions, then it is not only the possibility of disconfirming evidence but also the possibility of human error that may undermine scientific conclusions.

Third, if expert judgment plays a role in establishing scientific conclusions, it follows that there is room for reasonable disagreement within science. For instance, scientific experts may differ on the proper choice of experimental or mathematical method or how to interpret experimental findings. Typically, however, such disagreements are kept to a relative minimum, at least within a mature field, by disciplinary consensus, shared standards of evidence and evaluation, and a solid empirical foundation. This, of course, does not mean there is no human error in mature sciences. Nor does it mean that mature sciences are immune to revision based on future empirical evidence. Mature sciences are not infallible, either.

Most of the time, these features of scientific expertise pose no significant challenges to nonexperts. That’s because many scientific disagreements have few, if any, consequences for anyone outside the relevant scholarly communities. The situation becomes considerably more complex when scientific knowledge is applied in circumstances that do have such consequences. It is not just the significance of expert judgment that grows as science moves outside the laboratory; the role of expert judgment itself grows, as does the need for a wider array of judgments and types of judgment by experts and nonexperts alike.

Taking Expertise outside the Laboratory

Prudence is, as philosopher Herbert McCabe pithily put it, the “virtue which disposes us to think well about what to do.” And thinking well about what to do usually requires seeking others’ counsel. This, in turn, requires knowing whom to consult and how to weigh such advice.

For instance, it would be imprudent for you not to consult your doctor when deciding whether or how to treat yourself for a given disease. Similarly, it would be imprudent for a policymaker not to consult a scientist with relevant expertise when making a decision that requires scientific knowledge, such as whether and where to build a nuclear power plant or impose public-health measures. But it would be imprudent to consult a scientist or a doctor who lacks any relevant expertise (an entomologist rather than a nuclear physicist, say, or a podiatrist rather than an oncologist), has an obvious conflict of interest, or is a known liar or quack.

It also takes prudence to know how wide the sphere of deliberation should be. Most of the time, when there is uncertainty about how to proceed within science—for instance, when experts disagree about which hypothesis is best supported by evidence—deliberation will naturally exclude not only nonscientists but also most scientists who are not familiar with or engaged in the relevant subfield. In other words, the sphere of deliberation will be small and relatively homogeneous. By contrast, making policy decisions informed by scientific evidence requires consulting not only the relevant experts but also those nonexperts whose knowledge and experience are needed for effective decision-making. In these circumstances, the sphere of deliberation may be quite large and heterogeneous.

Deliberation is a reciprocal process with mutual obligations. Nonexperts must be willing to trust expert judgment, although not blindly or uncritically. Experts, for their part, must be willing to make recommendations while being open and honest about the possibility of error, the role of judgments, and the nature and extent of disagreement in their fields. Both must be realistic about what scientific evidence can achieve and the level of certainty it can attain. Failure to meet these obligations will only exacerbate unease, inflame distrust, and breed resentment.

Above all, policymakers must exercise good judgment when deciding on the best course of action. Scientific evidence is essential to this process, but it is rarely dispositive. Science, in other words, illuminates rather than eliminates the need for judgment and deliberation—especially when it comes to science-based policy.

 

This essay is adapted from The Role of Judgment and Deliberation in Science-Based Policy, which is forthcoming from the American Enterprise Institute.