There has been much debate over whether the period 1998–2013 experienced a hiatus in global temperature rise. Global warming skeptics point to the asserted hiatus. This is not surprising, since proponents of global warming had predicted warming over the same period. Validation, which establishes the truth of a scientific theory, rests on agreement between predictions drawn from the theory regarding future observations and the observations themselves. From this perspective, if the predicted temperature increases have not materialized, then the theory has been invalidated.
Both sides consider the issue important but disagree on whether the data show a hiatus. Such disagreements can arise based on what data are used and how the data are filtered to correct for bias and noise. Different people may filter data differently, depending on their assumptions regarding the measurement process. Good validation methodology requires decisions on filtering to be made prior to analysis. Unfortunately, this appears not to have been the case. According to Nature News, “Researchers revised the NOAA data set to correct for known biases in sea-surface-temperature records. The previous version of the NOAA data set had shown less warming during the first decade of the millennium.”
But would a hiatus in rising temperatures, if it does exist, really invalidate the theory of global warming? Indeed, is it even possible for the theory to be invalidated? That is, can criteria be established and observations made so that the theory can be accepted or rejected based on the degree to which predictions and observations satisfy the criteria? This is a deeper question. It is the establishment of such criteria and the possibility of testing a theory against future observations that makes a theory scientific.
Is Climate Science a Science?
In 2007, a paper was published in the Philosophical Transactions of the Royal Society A by Claudia Tebaldi of the National Center for Atmospheric Research in Boulder, Colorado, and Reto Knutti of the Institute for Atmospheric and Climate Science in Zurich, Switzerland. Tebaldi and Knutti state the fundamental problem with climate science in a single paragraph:
The predictive skill of a model is usually measured by comparing the predicted outcome with the observed one. Note that any forecast produced in the form of a confidence interval, or as a probability distribution, cannot be verified or disproved by a single observation or realization since there is always a non-zero probability for a single realization to be within or outside the forecast range just by chance. Skill and reliability are assessed by repeatedly comparing many independent realizations of the true system with the model predictions through some metric that quantifies agreement between model forecasts and observations (e.g. rank histograms). For projections of future climate change over decades and longer, there is no verification period, and in a strict sense there will never be any, even if we wait for a century . . . climate projections, decades or longer in the future by definition, cannot be validated directly through observed changes. Our confidence in climate models must therefore come from other sources.
After opening with a slightly vague statement concerning predictive skill about a “model”—rather than a clear statement about knowledge—Tebaldi and Knutti unequivocally state that climate models cannot be scientifically validated and give ironclad reasons why this is so. The paragraph is part of a well-thought-out discussion of modeling, experimentation, and, most importantly, statistics in climate studies.
The closing sentence is telling: Confidence, not knowledge, must come from sources not involving validation via observation. Tebaldi and Knutti go on in the article to discuss other sources of confidence and their shortcomings. But never do they vary from the crucial point that climate modeling cannot be scientifically validated.
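The quoted passage mentions rank histograms as one metric for assessing ensemble reliability across many independent realizations. A minimal sketch of the idea, using synthetic data rather than any climate output: for each forecast case, record where the observation ranks among the ensemble members; if the ensemble is statistically reliable, every rank is equally likely and the histogram is flat.

```python
import random
from collections import Counter

random.seed(2)

# Rank-histogram sketch with synthetic data. The ensemble and the
# observation are drawn from the same distribution, so this ensemble
# is reliable by construction and the histogram should be flat.
members, cases = 9, 20_000
counts = Counter()
for _ in range(cases):
    ensemble = [random.gauss(0, 1) for _ in range(members)]
    obs = random.gauss(0, 1)               # observation from the true system
    rank = sum(m < obs for m in ensemble)  # rank of obs among members: 0..9
    counts[rank] += 1

# Each of the 10 possible ranks should occur about 10% of the time.
print([round(counts[r] / cases, 3) for r in range(members + 1)])
```

The key point, consistent with Tebaldi and Knutti, is that this diagnostic requires many independent realizations of the true system; a single realization tells us nothing about reliability.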
Although this is not the venue to go into the details behind their reasoning, a couple of general points can be made. First, if a theory is probabilistic—meaning that it does not predict the outcome of any specific observation but instead gives a probability distribution over possible observations—then the theory cannot be validated or invalidated by any single observation.
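This point can be made concrete with a short simulation. Suppose, purely for illustration, that a probabilistic theory predicts a warming trend distributed normally with mean 0.2 °C and standard deviation 0.15 °C (the numbers are hypothetical, not taken from any climate model). Even when the theory is exactly right, a single observation falls outside the theory's own 90% prediction interval about 10% of the time—so one observation can neither confirm nor refute it.

```python
import random

random.seed(0)

# Hypothetical probabilistic "theory": the warming trend is
# normally distributed with mean 0.2 and std 0.15 (illustrative values).
mu, sigma = 0.2, 0.15

# The theory's 90% prediction interval (approximately +/- 1.645 sigma).
lo, hi = mu - 1.645 * sigma, mu + 1.645 * sigma

# Draw single realizations from the TRUE distribution (theory is correct)
# and count how often one falls outside the predicted interval anyway.
trials = 100_000
outside = sum(1 for _ in range(trials)
              if not (lo <= random.gauss(mu, sigma) <= hi))
print(outside / trials)  # close to 0.10 by construction
```

A single realization outside the interval is thus entirely consistent with a correct theory, which is why probabilistic predictions demand repeated independent comparisons.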
Second, contemporary scientists and engineers model highly complex systems involving thousands of variables and thousands of model-defining parameters. Owing to their sheer number, many model parameters cannot be experimentally determined, so they are subjectively entered into the model or left uncertain. As a consequence of this uncertainty, there are an infinite number of physical systems described by an infinite number of models. A subset of these is somehow averaged to provide a description of the phenomena. It is difficult to characterize how this averaging should be done and to quantify the level of uncertainty involved. This can all be done in what is known as a “Bayesian” framework, which provides mathematical rigor while incorporating uncertainty; however, one is still left without the possibility of scientific validation.
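A toy sketch of this kind of averaging, with every number an illustrative assumption rather than climate data: imagine a family of models indexed by a single "sensitivity" parameter that cannot be measured directly and so is assigned a subjective prior. The model class's prediction is then not one number but the distribution of predictions across the family, whose spread quantifies the uncertainty.

```python
import random
import statistics

random.seed(1)

# Toy model class: each model predicts trend = sensitivity * forcing,
# where the sensitivity parameter cannot be measured and is instead
# given a subjective prior. (All values are illustrative assumptions.)
forcing = 1.0

def sample_sensitivity():
    # Subjective prior over the unmeasurable parameter.
    return random.lognormvariate(-1.2, 0.4)

# Bayesian-style model averaging: sample models from the prior and
# collect their predictions. The "answer" is a distribution, not a number.
preds = [sample_sensitivity() * forcing for _ in range(50_000)]

mean = statistics.fmean(preds)
spread = statistics.stdev(preds)
print(round(mean, 3), round(spread, 3))
```

The mathematics here is rigorous, but note what is missing: nothing in the exercise tests the prior, or the model family itself, against future observations—which is precisely the validation gap Tebaldi and Knutti describe.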
Towards a Scientific Theory of Complex Systems
None of this is peculiar to climate science. Rather, complex modeling and computer simulation are ubiquitous across diverse fields such as biology, economics, social science, physics, and engineering. For instance, last summer a number of scientists, engineers, and philosophers participated in a workshop in Hanover, Germany, called How to Build Trust in Computer Simulations – Towards a General Epistemology of Validation. What does it all mean?
Basically, in the realm of natural phenomena, our desire to know has outstripped our understanding of what it means to know.
This is not an unusual situation. Major improvements in observational apparatus and computation inevitably lead to a desire for greater knowledge, to the point where the current theory of knowledge may be inadequate. The salient example is causality. For two thousand years following Aristotle, causal descriptions were considered necessary for scientific knowledge. Isaac Newton changed that with three words: “Hypotheses non fingo.” In English, “I frame no hypotheses.” Science concerns mathematical representation of behavior, not metaphysical notions behind the behavior. In his Philosophiae Naturalis Principia Mathematica, Newton is absolutely clear: “For I here design only to give a mathematical notion of these forces, without considering their physical causes and seats.”
A New Epistemological Crisis
Today, owing to our desire to model complex phenomena and the concomitant inability to obtain sufficient data and to compute at the scales desired, we face a new epistemological crisis. I believe there are four basic options: (1) dispense with modeling complex systems; (2) model complex systems and dishonestly claim that the models are scientifically valid; (3) model complex systems, admit that the models and predictions are not scientifically valid, utilize them pragmatically where possible, and be extremely prudent when interpreting them; or (4) put in the effort to develop a new scientific epistemology, recognizing that success within the next half century would be wonderful.
As one who studies decision making in the context of complex systems, I advocate options three and four: Continue to develop an approach using classes of uncertain models, with full recognition of the consequences of uncertainty, and at the same time strive towards a theory of scientific knowledge incorporating uncertainty in a rigorous epistemological framework. Option two is repugnant to science and detrimental to humanity, because it would lead to facile policy decisions by people who have no understanding of the issues. Option one would ignore major problems in medicine, physics, economics, etc., that have substantial impact on the human condition.
Incorporating uncertainty directly into scientific models may force us to live with a “weaker” form of scientific knowledge, but we have done so before. It was difficult to accept the understanding that science does not concern reality. But following Newton, such a conclusion was inescapable. Hume and Kant put the final nails into the coffin of naïve scientific realism. This did not prevent the advancement of science; indeed, it could be argued that, freed from metaphysics, natural science was free to pursue theories that were not in concordance with our physical intuition.
In QED: The Strange Theory of Light and Matter, Richard Feynman states the matter beautifully:
[Physicists] learned to realize that whether they like a theory or they don't like a theory is not the essential question. Rather, it is whether or not the theory gives predictions that agree with experiment. It is not a question of whether a theory is philosophically delightful, or easy to understand, or perfectly reasonable from the point of view of common sense.
This recognition could never have come about had science remained within an Aristotelian framework.
Are We Up to the Challenge?
Are we, as a society, up to the challenge? Are we teaching young minds to move fluidly across science, mathematics, statistics, and philosophy, or are we producing technicians who are narrowly confined to work within a specialty and cannot see the forest for the trees? Are we creating an environment where young people are encouraged to look beyond the buzzwords of the moment, like “big data,” and turn their attention to genuine knowledge?
The problem was articulated by Albert Einstein in a letter to Robert A. Thornton as far back as 1944, when education and research were far more rigorous:
So many people today—and even professional scientists—seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering.
Check the PhD theses, scientific literature, and federal funding to see how much effort is independent from the prejudices of our day. Are we driven by the demands of knowledge, as were Newton, Kant, and Einstein? Or do we march to the beat of bureaucrats whose motivations have little or nothing to do with knowledge? Do we really care about humanity? If we do, then we will take to heart the closing words of Richard Feynman from his 1974 Caltech commencement address:
So I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.
The fundamental issue for climate science, as well as many other sciences, is the epistemology of complex systems. Arguing about whether or not this or that observation validates or invalidates a theory is absurd when, as Tebaldi and Knutti correctly state, “projections . . . cannot be validated directly through observed changes.” It is equally absurd for people to claim that their belief in global warming is grounded in science. Instead of all the chatter, we should put forth great effort at formulating a viable epistemology within which the “truthfulness” of uncertain models can be characterized.
Can it be done? Of course. Will it be done? This is a choice to be made—and the future of humankind may depend upon it.
Edward Dougherty is Distinguished Professor of Electrical and Computer Engineering at Texas A&M University and Scientific Director of the Center for Bioinformatics and Genomic Systems Engineering.