An economist is someone who hears that something is working in reality, but before believing it, wants to know if it works in theory. As I tell students in my economics classes, learning to think like an economist is learning to think in terms of models. You may want to think about reality, but reality is far too complicated for our brains.

Think about maps. If you wanted a perfect map of your hometown—an absolutely perfect map, which contained everything—how large would the map need to be? The same size as your town. That wouldn’t be a very useful map. We build maps to answer questions; different maps answer different questions. A good map is one that gets rid of the unnecessary information, without omitting anything necessary to the purpose.

Models are like maps. They strip away detail so that we can think about how things actually work. We have to build models because our brains are not equipped to think through incredibly complicated systems. A model is useful when it helps us think; a model that is either too simple or too complex helps us less.

Understanding how models work is essential to making decisions that affect entire economies. When you see people forecasting the future, they are using models. When you see someone advocating a preferred policy to solve a problem, there is a model underneath the argument. Your first instinct on hearing people talk like this should be to ask what model is being used. Is it a good model?

The Coronavirus Models

It has been obvious from the coverage of the coronavirus pandemic that most people simply do not understand the nature of models. This is not surprising. Models are messy things, and few people have spent much time thinking about how they work.

There are two parts to a model. First, there is the structure of the model: a set of mathematical equations showing how one thing is connected to another. The number of equations and their mathematical complexity can both be quite high, but fundamentally the structure of a model is just like those two-equation, two-unknown problems you learned to solve in your algebra class. Second, there are the data you feed into the model. If you have bad data, the model will not be useful, no matter how carefully it is constructed. Again, your algebra class gives an example: if you know that Y is twice as large as X, but you don’t know the size of X, then the model is not terribly useful in telling you the size of Y.
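
To make that two-part anatomy concrete, here is a minimal sketch in Python, with invented numbers and no connection to any real epidemiological or economic model. The function is the structure; the value we pass in is the data.

```python
# Toy model in the two-equation, two-unknown spirit of an algebra class.
# Structure: the equation y = 2x. Data: an observed value for x.

def predict_y(observed_x):
    """Predict y from a measurement of x using the structural rule y = 2x."""
    if observed_x is None:
        # Structure without data: the equation alone cannot pin down y.
        raise ValueError("no measurement of x, so the model cannot determine y")
    return 2 * observed_x

print(predict_y(10))  # good data: the model says y = 20
# predict_y(None)     # missing data: the carefully built structure is useless
```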

The epidemiological models that are garnering so much attention in the coronavirus pandemic are not fundamentally different from economic models or weather forecasting models. They have a mathematical structure, and they use existing data to make predictions. Different epidemiological models give different results because they are built differently. Not surprisingly, modelers generally think their particular model is the best. Unfortunately, as of this date, the details of many of the models that are informing policy decisions have not been released to the public, so there is no way for others to evaluate the structures of the models themselves.

Perhaps even more importantly, the data being used are woefully incomplete. Consider the fatality rate. To know the fatality rate, you have to know both the number of deaths and the number of infections. We have decent, but not perfect, data on the number of deaths. We have nowhere near enough data to know the number of infections; without widespread, random samples of the population, there is no way to know that number. You can get the same number of deaths with a high number of infections and a low fatality rate or with a low number of infections and a high fatality rate. Which of those scenarios is accurate? The answer has enormous implications for how easily the disease spreads in daily interaction.
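
A few lines of arithmetic, with numbers invented purely for illustration, show how the same death count is consistent with wildly different fatality rates:

```python
# Invented numbers, for illustration only: 1,000 observed deaths are
# consistent with very different fatality rates, depending on the
# unknown number of infections.

deaths = 1_000

for infections in (10_000, 100_000, 1_000_000):
    print(f"{infections:>9,} infections -> fatality rate {deaths / infections:.1%}")

# Output:
#    10,000 infections -> fatality rate 10.0%
#   100,000 infections -> fatality rate 1.0%
# 1,000,000 infections -> fatality rate 0.1%
```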

A model that uses messy data will not give a precise answer. It will give a range of potential outcomes, and that range can be quite large. The press has an instinctive tendency to latch onto the most alarmist numbers. We saw this with the Imperial College model, the most influential model in this crisis. Headlines blared that the model predicted that, if nothing were done, there would be half a million deaths in the UK and over two million deaths in the US. A short time later came a follow-up announcement that the same model was predicting around 20,000 deaths in the UK. Much hand-wringing ensued. The real problem is not that a model gave a different answer as we acquired better data and people’s behavior changed. The real problem is the media’s sensationalism about the upper end of predictions from a model built over thirteen years ago to predict flu pandemics.
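
A toy projection, again with invented numbers, shows how an honest model reports a band of outcomes rather than a single figure, and why the top of the band tends to make the headline:

```python
# Invented numbers: one model structure, one uncertain input.
# Because the true fatality rate is unknown, we run the model across
# a plausible range and report the whole band of outcomes.

projected_infections = 5_000_000   # assumed for illustration
low_rate, high_rate = 0.001, 0.01  # uncertain input: 0.1% to 1.0%

low_deaths = projected_infections * low_rate
high_deaths = projected_infections * high_rate

print(f"projected deaths: {low_deaths:,.0f} to {high_deaths:,.0f}")
# projected deaths: 5,000 to 50,000
# An honest summary reports the whole band; a headline quotes the 50,000.
```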

What If the Model Is Bad?

People will disagree when they are using different models. Figuring out the differences between the models goes a long way toward understanding the disagreement. If the models being used are all good models, it may be difficult to determine which one should be used. Reasonable people will disagree.

But what if someone is using a bad model? What if someone is using a model that is manifestly wrong? Imagine trying to navigate a city you have never visited using a map of your hometown. Is it possible that decision-makers could rely on a model that wrong: a map of one place used to travel through an entirely different place?

Unfortunately, yes, it is possible. In fact, it has happened, and the consequences aren’t pretty. How bad could it be? Think “Great Depression” bad. Thomas Humphrey and Richard Timberlake’s book, Gold, the Real Bills Doctrine, and the Fed: Sources of Monetary Disorder, 1922–1936, discusses a time when the decision-makers at the Fed were horrifically wrong. The mistake? Federal Reserve officials believed that a theory, the real bills doctrine, was true. Now, unless you have studied monetary theory, you have probably never heard of the real bills doctrine. Don’t feel bad about that; the theory is wrong, just plain wrong. Fed officials should have known better: the fact that the theory is an economic fallacy was known at the time.

The result? When banks started failing in the early 1930s, the Fed just watched. After all, the banks had been making the kinds of loans that the real bills doctrine forbade, so the Fed thought it a good thing that these banks failed: their demise would have a cleansing effect on the system. Unfortunately, when the banks collapsed, the money supply collapsed. A bad model led to a bad monetary policy, and the result was 25-percent unemployment.

Is this an isolated case? Not at all. It is actually depressingly common to see policy experts or government officials rely on painfully crude forecasting models. To see how this works, consider weather forecasting. Suppose today is July 3, and we want to predict tomorrow’s temperature. Not surprisingly, a really good prediction is that the temperature on July 4 will be roughly the same as it is on July 3.

Now, let’s abuse our temperature-prediction model. The weather on November 4 is expected to be the same as on November 3, which is expected to be the same as on November 2, which is expected to be . . . the same as on July 3. We have just turned a good model for predicting tomorrow’s weather into a really bad model for predicting the weather in the distant future. Often, when you see someone show a graph of something to date and then simply draw a line extending out from there, they are using exactly this sort of model. The problem is even worse when what is being predicted is people’s behavior: in the face of change, people’s behavior changes. Forecasting models that do not account for changes in human behavior are really awful. This, by the way, was the lesson of the 1970s: the macroeconomic models of the 1960s paid far too little attention to changes in behavior.
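
Here is that abuse written out as a sketch, with an invented temperature: a persistence forecast that is sensible one day ahead becomes absurd when chained 124 days ahead, because every forecast collapses to today’s value.

```python
# A naive "persistence" forecaster: tomorrow's temperature equals today's.
# Chaining it day after day turns a decent one-day model into a terrible
# long-range one, since every forecast collapses to today's value.

def persistence_forecast(today_temp, days_ahead):
    forecast = today_temp
    for _ in range(days_ahead):
        forecast = forecast  # "tomorrow will be like today," applied again
    return forecast

july_3_temp = 88.0  # invented value, degrees Fahrenheit
print(persistence_forecast(july_3_temp, days_ahead=1))    # July 4: plausible
print(persistence_forecast(july_3_temp, days_ahead=124))  # November 4: still 88.0
```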

Costs and Benefits of the Shutdown

Using a bad model can lead to significant undesirable consequences when you are setting national policy, and people do not pay enough attention to this danger. A real problem appears in society; someone uses the wrong model to forecast the future; policy is then designed around that forecast. The problem may be real, and the policy solution may even be a good idea, but unless we actually figure out whether the model itself is any good, the debate over the policy is just posturing.

One way to view the debate we are currently witnessing over the costs and benefits of shutting down the whole economy is that it is a clash of models. On one side, we have the epidemiological models, which forecast the harm from the virus. On the other side, we have the economic models, which forecast the harm from attempts to control the virus. What we do not have, because nobody has had time to build it, is a larger model that combines the two and would allow us to think through their combined effects.

However, even such a supermodel would be missing something crucial. Epidemiological models and economic models used to forecast the future have something in common: they both model things that are measurable. Not everything can be measured. Indeed, much of what makes a society a good society is very difficult to quantify. What is the value of human companionship, or of gathering together on Easter? What is the value of knowing that your elderly parents are safe from a highly infectious disease? These things are crucial to a good society, but there is no way to put any of them into the epidemiological or economic models we use to forecast. As with many things, the part that is measurable may not be the most important part. As a result, the debate between those who insist that we follow the advice of the epidemiologists and those who insist that we prioritize the economic effects of that advice may actually be a proxy war over what the most important things in a society are.