From Facebook to Amazon, Google, and Apple, large corporations increasingly gather huge amounts of data on their users, then use that data to predict behavior, tailor the content they display, or suggest products to purchase. But the use of such complex calculations is not limited to private corporations.

There are increasing calls from prominent individuals to put faith in the power of machine learning to determine national policy. In November 2016, for example, Jack Ma, founder of retail giant Alibaba, stated:

Over the past 100 years, we have come to believe that the market economy is the best system, but in my opinion, there will be a significant change in the next three decades, and the planned economy will become increasingly big. Why? Because with access to all kinds of data, we may be able to find the invisible hand of the market. The planned economy I am talking about is not the same as the one used by the Soviet Union or at the beginning of the founding of the People’s Republic of China. The biggest difference between the market economy and planned economy is that the former has the invisible hand of market forces. In the era of big data, the abilities of human beings in obtaining and processing data are greater than you can imagine. With the help of artificial intelligence or multiple intelligence, our perception of the world will be elevated to a new level. As such, big data will make the market smarter and make it possible to plan and predict market forces so as to allow us to finally achieve a planned economy.

The term “Artificial Intelligence” (AI) describes computer programs that can model the environment of their assigned tasks and act autonomously to maximize their chances of succeeding in those tasks. In particular, “Machine Learning” (ML), a variety of AI that uses specialized algorithms to learn from mistakes and new data, has flourished in the last fifty years. It has recently begun to be employed in the fields of economics, social science, and public policy.

The increasing sophistication and competence of machine learning in these fields have given public policy analysts misguided confidence in the ability of machine learning-aided economic policy to substitute for human decisions. The truth is, these new methods repeat the errors of previous attempts to automate and centralize economies. Although machine learning demonstrates an impressive capacity to solve complex analytical problems, it only finds associations rather than meaningful causal relationships, and it is unable to overcome fundamental information and incentive problems that the free market adequately solves. In other words, in spite of Mr. Ma’s optimism, artificial intelligence will simply never be smart enough to replace the free market.

The Power of Machine Learning

Artificial intelligence and machine learning are powerful tools for recognizing patterns with remarkable accuracy. Imagine, for instance, that we try to predict credit card fraud using dozens of data points from credit card transactions (e.g., the time of day and day of the week when the transaction occurred, the item purchased and its price, the use of the card in the previous twenty-four hours, etc.).

In this case, ML is helpful, because we don’t need to know which characteristics of transactions are relevant, nor what kind of relationship exists between the data points. If we have access to millions of credit card transactions and the knowledge of whether each transaction was fraudulent, we can “train” a model of the relationship, called the “functional form,” on those past patterns and apply it to new data to detect, often with fantastic success, whether a new transaction is fraudulent. At a fundamental level, there is nothing very “intelligent” here: it is an exercise in massive data fitting. What is intelligent is that the process is highly automated and, therefore, easily scalable to multiple environments.
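To make the "massive data fitting" idea concrete, here is a minimal sketch of what such a pipeline might look like. The data, feature names, and model choice below are all hypothetical illustrations (they are not taken from any real fraud-detection system): we generate synthetic labeled transactions, fit a standard classifier to them, and then score a new transaction.

```python
# Minimal sketch of ML as large-scale data fitting for fraud detection.
# All data is synthetic and the features are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 50_000  # labeled past transactions (real systems use millions)

# Hypothetical features: hour of day, price, uses of the card in the
# previous 24 hours, distance from the cardholder's home.
X = np.column_stack([
    rng.integers(0, 24, n),        # hour of day
    rng.lognormal(3.0, 1.0, n),    # price of the item
    rng.poisson(2, n),             # uses in the last 24 hours
    rng.exponential(10.0, n),      # km from home
])
# Synthetic labels: fraud is rare and loosely tied to odd hours,
# expensive items, and distant locations.
score = 0.05 * X[:, 1] + 0.2 * X[:, 3] + 3 * (X[:, 0] < 5) - 8
y = (score + rng.normal(0, 2, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)  # "train" the functional form on past patterns

# Score a new transaction: estimated probability that it is fraudulent.
new_tx = np.array([[3, 450.0, 6, 80.0]])
print("fraud probability:", model.predict_proba(new_tx)[0, 1])
print("held-out accuracy:", model.score(X_test, y_test))
```

Nothing in this sketch requires knowing why fraud looks the way it does; the algorithm simply fits whatever associations are present in the labeled data.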

There are three main reasons why machine learning is currently in vogue. First, our computers have become much more powerful, capable of performing the data analysis that, for decades, academics had only theorized about. Second, thanks to the internet and cheap computing, we now have access to much larger and more comprehensive datasets. In economics, for example, a typical empirical paper had a few hundred observations in the 1970s. Today, it is common to see papers with tens of millions of observations. That is why AI and ML are often associated with expressions such as “big data” or “data analytics.” Finally, computer scientists and applied mathematicians have made huge advances in the efficiency of machine learning techniques that, in concert with growing computational power and datasets, have made feasible data analysis techniques that were previously only theoretical.

In order to harness the power of machine learning, incredibly large datasets are required. The rule of thumb in the industry is that one needs around a million labeled observations to train a network. These enormous datasets have arisen in two main ways. First, large firms like Netflix and Amazon gather vast amounts of information from customer activity, allowing them to predict what else you might want to watch or purchase. Second, new techniques can generate data on their own, for instance by simulating enormous numbers of possible scenarios, and identify relationships within these simulated data that we can then compare to data we find in the real world.

Public Policy and the Limits of Machine Learning

Unfortunately, in many areas of public policy, we do not have access to such a wealth of data and, most likely, we never will. Take the example of setting monetary policy, i.e., the regulation of how much money to supply, typically conducted by national central banks, such as the Federal Reserve in the U.S.

Monetary policy is a relatively straightforward topic with fewer moving parts than other kinds of economic policy. Could machine learning ever replace the Federal Open Market Committee (FOMC), the main monetary policymaking body of the Federal Reserve System? I am skeptical. For one thing, the FOMC usually has a limited amount of data. In the United States, we only have reliable data for output, consumption, and investment after World War II and, even then, only at a quarterly level. If we count them, from 1947:Q1 (the first “good” observation in terms of the accuracy of our measurement) until 2021:Q2 (the last observation as I write this), we have 298 data points. This is far fewer than machine learning techniques typically require.
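For readers who want to verify the count, the arithmetic is trivial (shown only to make the data constraint concrete):

```python
# Counting the quarterly observations from 1947:Q1 through 2021:Q2, inclusive.
full_years = 2020 - 1947 + 1          # 74 complete years, 1947-2020
observations = full_years * 4 + 2     # plus 2021:Q1 and 2021:Q2
print(observations)                   # 298
```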

Furthermore, the US economy has radically changed, which limits the relevance of older data. We have moved from an economy dominated by manufacturing to an economy driven by services, and financial innovations have transformed the relationship between financial and real variables. The evolving structure of the economy shifts the relationships between the data points, making it harder for machine learning to find clear patterns. These structural changes mean that econometricians often do not use observations before the early 1980s when they estimate the effects of monetary policy on output. In fact, such estimates change sharply depending on whether we include the earlier observations. Moreover, the economy is bound to keep changing, which means that older observations will keep losing their relevance and we will have to rely on ever-newer data.

Employing individual data (such as consumption data of households or financial transactions) can help us get more observations, but we will still encounter similar problems. Ponder, for example: how informative are the consumption patterns of married couples in the 1990s, in their early 40s, with several kids at home, about the consumption patterns of single individuals in the 2020s, also in their early 40s, without kids? Moreover, there are severe limitations on what individual data can teach us in the absence of detailed explanations about the nature of relationships between different data points.

This additional problem with using microdata is an instance of the Lucas critique, named after Robert Lucas, one of the most influential economists of the last century. The essence of the critique is that it is difficult, if not impossible, to tell from the data how much of observed behavior is attributable to unobserved characteristics and how much to the impact of particular public policies.

Machine learning faces the same problem that economists have faced for a century: distinguishing causation from correlation. Moreover, the answer that machine learning provides is valid only under a constant set of circumstances: changes in policy may affect different individuals differently, which can give misleading results about the impact of a policy.
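Before turning to a real-world example, a stylized simulation can make the point concrete. Everything below is invented for illustration (the variables, the coefficients, and the "policy" are hypothetical): a model fit to behavior observed under one policy regime predicts well right up until the policy changes, at which point the fitted association breaks down, because it was never the underlying decision rule.

```python
# Stylized illustration of the Lucas-critique problem: an association fitted
# under one policy regime mispredicts once the policy changes. All numbers
# here are made up for the sake of the example.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 50_000
income = rng.normal(50, 10, n)

def spending(income, tax_rate):
    # "Structural" behavior: spending depends on income net of a policy
    # parameter (a hypothetical tax rate).
    return 0.8 * income * (1 - tax_rate) + rng.normal(0, 1, len(income))

# Fit a model to behavior observed under the old regime (tax rate 0.2).
old = spending(income, tax_rate=0.2)
model = LinearRegression().fit(income.reshape(-1, 1), old)
print("fitted slope under the old regime:", model.coef_[0])   # about 0.64

# After the policy changes (tax rate 0.4), the same model is systematically
# wrong, even though the data fitting itself was done flawlessly.
new = spending(income, tax_rate=0.4)
pred = model.predict(income.reshape(-1, 1))
print("mean prediction error after the change:", (pred - new).mean())
```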

If an airline tightens its rules for upgrades in a way that implicitly favors business-class travelers with few but expensive trips, this type of traveler will react differently to the change than business-class travelers who make more regular, shorter trips. Machine learning, which operates without a theory of the decision-making that underlies observed behavior, will only identify associations between rule changes and purchase activity, rather than true customer preferences. Moreover, experimentation with different policies to observe greater variation in behavior is often infeasible or even immoral. An airline cannot sporadically alter its upgrade rules unless it wants to alienate its customers, just as we cannot perform experiments with national monetary policy without risking wild economic fluctuations.

Second, even when experiments are possible, experimenting on a sample only allows machine learning to explain the behavior of individuals in that sample. For example, administering a lottery for applicants to charter schools can only tell us about the impact of charter schools on those who applied, not about the general population, many of whom did not apply. Furthermore, firms often enjoy far greater scope for experimentation than national governments do; if experimentation is an imperfect tool even for many firms, it is all but unavailable to national governments.

Free Markets, Not “Digital Socialism”

The fundamental problem with relying on machine learning to recognize economic trends and determine economic policy coincides with the reasons that Friedrich Hayek gave against conventional, socialist central planning. The objection to central planning is not that solving the associated optimization problem is extremely complex, although it is. If that were the only problem, AI and ML could perhaps help us solve it. The deeper objection is that the information planners need is dispersed and that, in the absence of a market system, agents will never have the right incentives to reveal it or to create new information through entrepreneurial and innovative activity.

A simple, real-life application of central planning illustrates this point. Every year, the department of economics at the University of Pennsylvania faces the challenge of setting up a teaching distribution for the next academic year. Each member of the faculty submits her preferences in terms of courses to be taught, day of week, time of day, etc. The computational burden of finding the optimal allocation is quite manageable. We have around thirty-two faculty members. Once you consider that certain professors have particular specialties, the permutations to consider are limited. A few hours in front of Excel deliver the answer: it seems a central planner at Penn Economics can do her job.
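To see just how manageable the computation is, a toy version of such a scheduling problem can be solved in a few lines. The faculty count comes from the essay; the preference numbers below are random stand-ins, since the actual submitted requests are obviously not public.

```python
# Toy version of the scheduling computation: with 32 faculty members and 32
# course slots, a cost-minimizing one-to-one assignment is found instantly.
# The "cost" numbers are hypothetical stand-ins for submitted preferences.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n_faculty, n_courses = 32, 32

# cost[i, j]: how unattractive course slot j is to faculty member i,
# as stated in the submitted requests.
cost = rng.uniform(0, 10, size=(n_faculty, n_courses))

rows, cols = linear_sum_assignment(cost)  # optimal assignment
print("best schedule's total stated cost:", cost[rows, cols].sum())
```

The optimization, in other words, is the easy part; as the next paragraph argues, the hard part is that the stated preferences themselves cannot be trusted.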

The real challenge is that, when I submit my teaching requests, I do not have an incentive to reveal the truth about my preferences, nor to think too hard about developing a new course that students might enjoy. I might not mind teaching a large undergraduate section on a brand-new hot topic and, if I am a good instructor, the students will be better off. However, I will not be compensated for the extra effort, even if that extra effort is minimal, and thus I will have an incentive to request a small section for advanced undergrads on an old-fashioned topic. This outcome is not optimal. If the Dean could, for instance, pay me an extra stipend, I would teach the large, innovative section, the students would be happier, and I would be wealthier. Even this alternative, however, would be subject to more sophisticated strategic information manipulation as my colleagues and I vie to optimize our course loads according to our personal preferences.

The only reliable method we have found to aggregate those preferences, abilities, and efforts is the free market, because it aligns, through the price system, incentives with information revelation. This method is not perfect, and its outcomes are often unsatisfactory. Nevertheless, like democracy, all the other alternatives, including “digital socialism,” are worse. By and large, we should still rely on markets to allocate resources.

Markets work when we implement simple rules, such as first possession, voluntary exchange, and pacta sunt servanda (“agreements must be kept”). We did not come up with these simple rules thanks to an enlightened legislator or a blue-ribbon committee of academics “with a plan.” On the contrary, these simple rules were the product of an evolutionary process. Roman law, the common law, and the lex mercatoria of medieval merchants were bodies of norms that gradually appeared over centuries, thanks to the decisions of thousands and thousands of agents. The forces of evolution and trial and error produced the optimal solution to what economists call a mechanism design problem. Those who believe machine learning can do the same will be sorely disappointed.