A Google search returns about 350,000 hits for “war on science.” Glancing through the first hundred results reveals that this “war” consists mainly of political posturing. Little of it directly concerns science. Nevertheless, such rhetoric can result in auxiliary harm to science by inclining scientists to adhere to acceptable lines in order to further their careers or avoid castigation. The degree of harm will depend on the dedication of scientists and their intrinsic desire to gain knowledge. Still, although it is worrisome, sophomoric banter does not directly attack the integrity of science.

More alarmingly, science has been under siege for more than half a century from a very different set of forces. This assault is not rooted in unlearned political commentary, but in the attitudes of scientists themselves.

**What Is Scientific Truth?**

Modern science emerged in the seventeenth century with two fundamental ideas: planned experiments (Francis Bacon) and the mathematical representation of relations among phenomena (Galileo). This basic experimental-mathematical epistemology evolved until, in the first half of the twentieth century, it took a stringent form involving (1) a mathematical theory constituting scientific knowledge, (2) a formal operational correspondence between the theory and quantitative empirical measurements, and (3) predictions of future measurements based on the theory. The “truth” (validity) of the theory is judged based on the concordance between the predictions and the observations. While the epistemological details are subtle and require expertise relating to experimental protocol, mathematical modeling, and statistical analysis, the general notion of scientific knowledge is expressed in these three requirements.

Science is neither rationalism nor empiricism. It includes both in a particular way. In demanding quantitative predictions of future experience, science requires formulation of mathematical models whose relations can be tested against future observations. Prediction is a product of reason, but reason grounded in the empirical. Hans Reichenbach summarizes the connection: “Observation informs us about the past and the present, reason foretells the future.”

The demand for quantitative prediction places a burden on the scientist. Mathematical theories must be formulated and be precisely tied to empirical measurements. Of course, it would be much easier to construct rational theories to explain nature without empirical validation or to perform experiments and process data without a rigorous theoretical framework. On their own, either process may be difficult and require substantial ingenuity. The theories can involve deep mathematics, and the data may be obtained by amazing technologies and processed by massive computer algorithms. Both contribute to scientific knowledge, indeed, are necessary for knowledge concerning complex systems such as those encountered in biology. However, each on its own does not constitute a scientific theory. In a famous aphorism, Immanuel Kant stated, “Concepts without percepts are empty; percepts without concepts are blind.”

**All Scientific Theories Are Contingent**

Validation is the salient epistemological issue for a scientific theory. It confronts us with two profound aspects of scientific knowledge: inter-subjectivity and uncertainty. Mathematics, experimental protocols, and validation criteria are universally understandable. However, the choice of validation criteria is subjective. Hence, while a scientific theory is not objective, in the sense that there is no universal agreement on its validity, it is inter-subjective, in the sense that there is universal understanding of the theory and the criteria for its truth. In addition, since many complex systems are modeled by random processes, and measurement procedures exhibit randomness, uncertainty is inherent to scientific theories.

Consider the following simple validation criterion. There is a value *A* and a measurement *X,* such that the theory will be accepted if and only if *X* < *A*. Even if we agree on the criterion’s form, we may not agree on the acceptance value *A*. If one person chooses a large *A* and another chooses a small one, then the theory may be acceptable to the former but not to the latter. Indeed, given the possible values of the measurement, one might choose *A* so large that the theory will be accepted no matter the measurement, and the other might choose *A* so small that the theory will be rejected no matter the measurement. Even if both agree on *A*, on account of randomness, the measurement *X* may sometimes result in acceptance and other times result in rejection.
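The interplay of subjectivity and randomness in this criterion can be made concrete with a short simulation. The following sketch is purely illustrative: the numbers (a measurement distributed around 5.0, thresholds of 4.0 and 6.0) are hypothetical choices, not drawn from the text.

```python
import random

random.seed(0)

def validate(measurement, A):
    """Accept the theory iff the measurement X falls below the threshold A."""
    return measurement < A

# Hypothetical measurement process: X is noisy, normally distributed
# around a true value of 5.0 with standard deviation 1.0.
def measure():
    return random.gauss(5.0, 1.0)

A_strict, A_lenient = 4.0, 6.0  # two subjective choices of the acceptance value

trials = [measure() for _ in range(10_000)]
accept_strict = sum(validate(x, A_strict) for x in trials) / len(trials)
accept_lenient = sum(validate(x, A_lenient) for x in trials) / len(trials)

# Same theory, same data-generating process -- yet the two criteria
# disagree, and neither accepts or rejects with certainty.
print(f"P(accept | A = {A_strict}): {accept_strict:.2f}")
print(f"P(accept | A = {A_lenient}): {accept_lenient:.2f}")
```

Run many times, the strict criterion accepts only a small fraction of measurements while the lenient one accepts most of them, and any single measurement may land on either side of a given threshold.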

Owing to uncertainty, concordance between predictions and observations involves statistical analysis, and the degree of acceptance of a scientific theory must itself be quantified. Since statistical accuracy is necessary, statistical tests and estimations void of a mathematical theory describing their own accuracy are useless. Unfortunately, use of statistical-looking methods lacking any theoretical basis relevant to the problem under consideration is ubiquitous.

Because a model is validated by testing predictions, even when it is accepted, a scientific theory remains contingent, standing open to rejection arising from new observations. In science, the occurrence of anomalies is not an anomaly.

**How to Evaluate a Scientific Theory**

In the spirit of David Hume’s *Enquiry Concerning Human Understanding*, when presented with a scientific theory, one should ask four questions:

- Does it contain a mathematical model expressing the theory?
- If there is a model, does it contain precise relationships between terms in the theory and measurements of corresponding physical events?
- Does it contain validating experimental data—that is, a set of future quantitative predictions derived from the theory and the corresponding measurements?
- Does it contain a statistical analysis that supports the acceptance of the theory, that is, supports the concordance of the predictions with the physical measurements—including the mathematical theory justifying application of the statistical methods?

If the answer to any of these questions is negative, “Commit it then to the flames: for it can contain nothing but sophistry and illusion.” Tempering David Hume’s hyperbole, if a theory constituted by a mathematical model lacks sufficient experimental validation, it can still have great value. Obviously, every accepted theory was at one time a mathematical model awaiting validation. Moreover, the model might represent a stage on the way to a satisfactory theory or a preliminary conceptualization that suggests further experimentation.

**The Illusion of Big Data**

The flavor of the day is empiricism, and its latest incarnation is Big Data. This term refers to massive data sets often collected with no objective in mind, the idea being that with sufficient computing power one can mine the data for relationships. Some argue that no theory is needed, because scientific knowledge will emerge directly from the data unbiased by questions posed from human understanding—as if data-mining algorithms arise *ex nihilo*.

Ignoring the most extravagant claims surrounding Big Data, it is well known that more data are not necessarily better. If new data have negligible information regarding the formulation of a model, then adding them to existing data can yield poorer inference. Noise in the new data can obscure useful information in the old data while not providing additional useful information. A bigger data set is not necessarily a better data set.
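The claim that adding data can yield poorer inference is easy to demonstrate. The sketch below uses hypothetical numbers of my own choosing: a small set of precise measurements is pooled with a much larger set of very noisy ones, and the pooled sample mean estimates the true value worse than the small set alone.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 10.0

def rmse(estimates):
    """Root-mean-square error of a list of estimates of TRUE_MEAN."""
    return statistics.fmean((e - TRUE_MEAN) ** 2 for e in estimates) ** 0.5

old_only, pooled = [], []
for _ in range(500):
    old = [random.gauss(TRUE_MEAN, 1.0) for _ in range(100)]    # precise data
    new = [random.gauss(TRUE_MEAN, 30.0) for _ in range(900)]   # nine times more data, far noisier
    old_only.append(statistics.fmean(old))
    pooled.append(statistics.fmean(old + new))

print(f"RMSE, old data alone:   {rmse(old_only):.3f}")
print(f"RMSE, old + new pooled: {rmse(pooled):.3f}")
```

The noisy new data dominate the naive pooled average, inflating its error by roughly an order of magnitude; a bigger data set is not a better one unless the inference procedure accounts for what the new data actually contribute.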

It has been argued that Big Data presents us with the ability to pursue data-driven science, in which the scientist is aided in his theorizing by data. Lest one be tempted to believe there is novelty here, it might be chastening to recall that Nicolaus Copernicus used data collected by Claudius Ptolemy about 1,400 years earlier to develop his heliocentric theory.

**What Do String Theory and Intelligent Design Have in Common?**

Having criticized empiricism, let us consider rationalism. William Dembski, a prominent proponent of intelligent design (ID), recognizes that ID is not science. It contains no mathematical model and, *ipso facto*, no concordance between predictions and future observations.

Yet the fact that ID is not science does not mean that it lies outside of the realm of reasoned discussion. In the *Critique of Practical Reason*, Immanuel Kant writes, “I see before me order and design in nature, and need not resort to speculation to assure myself of their reality, but to explain them I have to presuppose a Deity as their cause.” Kant does not consider this a scientific argument. Rather, he believes that “this presupposition . . . is the most rational opinion for us men.” Accepting ID as a kind of science would require the abandonment of both mathematical formulation and prediction.

Because it concerns a designer external to nature, ID is *ipso facto* metaphysical rather than scientific in scope. Certainly, this cannot be said of physics. Nevertheless, Nobel laureate physicist Burton Richter writes, “Simply put, most of what currently passes as the most advanced theory looks to be more theological speculation, the development of models with no testable consequences.” Regarding string theory, in *The Trouble with Physics*, Lee Smolin writes that it “has failed to make any predictions by which it can be tested, and some of its proponents, rather than admitting that, are seeking leave to change the rules so that their theory will not need to pass the usual tests we impose on scientific ideas.”

In the main, one should be wary of grand scientific theories. For Aristotle, there was no demarcation between physics and metaphysics. That changed with modern science. Whereas metaphysics explains the big picture, science is restricted to mathematical models and a notion of truth grounded in the predictive capacity of those models. This is a demarcation, not a negative criticism of either metaphysics or science.

**Changing the Rules of Science**

Rationalism and empiricism both aim at changing the rules to return to a more primitive, pre-Galilean conception of science in which the demands of knowledge are softened by weakening the relationship between theory and observation.

And the rules *are* changing—not via reasoned analysis, but *de facto*. The centuries-long debate among Bacon, Galileo, Hume, Kant, Mill, Einstein, Bohr, and others is being virtually ignored, and the “scientific” literature is becoming a hodgepodge of methods, computations, and explanations whose acceptability is little more than a matter of fancy. Pseudo-statistical data crunching has come to be loudly proclaimed as “science” in those parts of the academy, industry, and government where radical empiricism rules the roost, and human reason has been abandoned in favor of massive unplanned data collection and prodigious computations whose meaning, if there is any, is shrouded in mystery.

The evisceration of science’s epistemology constitutes the real war on science, and this war is aimed directly at its vitals.

*Edward Dougherty is Distinguished Professor of Electrical and Computer Engineering at Texas A&M University and Scientific Director of the Center for Bioinformatics and Genomic Systems Engineering.*