By now, you’ve probably seen a lot of coverage of Sam Bankman-Fried, the collapse of his cryptocurrency exchange FTX, and his potentially fraudulent behavior along the way. SBF belongs to a cohort of young, highly influential entrepreneurs inspired by effective altruism, which is a self-described “research field and practical community.” EA is one of those rare (and sometimes bizarre) academic theories that has managed to escape the walls of the ivory tower and influence powerful people and institutions: tech billionaires like SBF, philanthropists, and many nonprofits.
So what is effective altruism? One of its core goals is to make sure that charitable funds and resources are used as efficiently and effectively as possible, hence the qualifier “effective.” Another EA tenet is that, because every human being is of equal worth, geography should not in principle limit our giving, though in practice it must—we can’t neglect our children to meet the needs of someone on a different continent. But practical constraints aside, giving shouldn’t discriminate based on geography, according to EA. Nor should philanthropic efforts be temporally bound: future lives are just as valuable as present ones, after all. So EA has become tied to various kinds of longtermism, some much weirder than others. Some futurists are concerned with good old-fashioned climate change, while others aspire to help future generations settle on Mars.
Radical fringes notwithstanding, there’s a lot to commend about EA. It endorses good stewardship of resources; it recognizes the dignity of every human being; and it pushes back against the kind of presentism that disregards generations to come. This last feature of EA is something that ought to resonate with conservatives: it calls to mind Edmund Burke’s “eternal society,” which suggests that individuals and societies should make their choices with future generations’ welfare in mind. Some healthy versions of EA seem possible, and some Christians have gotten behind it.
But some iterations of EA should give us pause. Certain adherents have felt ethically driven to pursue high-earning careers in tech or finance and to use their wealth for charitable giving. But should we really trust the judgments of quirky billionaires about what’s good? And EA becomes decidedly un-Burkean—and undesirable, for that matter—when it intersects with transhuman futurism.
The core defect of some EA thinking seems to be its tie to utilitarianism. Utilitarianism, a type of consequentialism, advances this moral imperative: maximize the good for the greatest number of people. It offers a formula for calculating right action: the correct moral choice is whatever produces maximal happiness for the maximal number of people.
Utilitarianism is ultimately untenable as a philosophy, in my view. One problem is that it can’t always discern when the means or motives for doing good have become corrupt. And it gets even more muddled when applied to modern circumstances, thanks to our technological capabilities. The globe is more interconnected than it has ever been, so everyone knows more about each other’s material needs than ever before. And with multiple charities for every need and every place, how can individuals possibly choose how best to allocate their time and talent under a utilitarian framework? Utilitarianism’s demands on individuals are so absolute and unyielding that some EA proponents admit that embracing functional “hypocrisy” is necessary.
But I suspect most people would rather not subscribe to a moral framework that requires hypocrisy and that is so out of step with daily life. After SBF’s downfall, one major EA donor tweeted, “We likely need to act a little more like deontologists and virtue ethicists. There is much to be said for clearly articulated rules!” Luckily for him, Public Discourse’s archives are replete with deontology and virtue ethics. Here, I suggest three essays for those interested in finding a better ethical framework than EA.
First, Robert George explains why utilitarianism doesn’t work. Utilitarianism suggests all goods exist on the same plane as one another and therefore can be ranked, then added up to calculate what action would produce the greatest good for the greatest number. But, as George explains, human goods are variegated and can’t be ranked. He argues that we “ought to choose those options, and only those options, that are compatible with the human good considered integrally—that is to say, with an open-hearted love of the good of human persons considered in all of its variegated dimensions.”
Second, I recommend Christopher Tollefsen’s 2013 essay, “Charity with a Conscience,” which is a response to an op-ed written by Peter Singer (who, by the way, is a leading proponent of EA). Singer had argued that charitable giving should go exclusively to “basic needs” rather than arts, music, or other cultural endeavors. Tollefsen’s response is: “Any human good, in fact, generates genuine needs, and thus charitable giving may be oriented to the satisfaction of needs generated by all the basic human goods: knowledge, aesthetic experience, religion, marriage, play, and others.” Utilitarianism, it turns out, smuggles in the belief that material goods like health are better than cultural or social ones like art and friendship.
Finally, in a review of John Mueller’s Redeeming Economics, Ryan Anderson draws attention to a helpful distinction between benevolence and beneficence, which can guide us in thinking about what we owe and to whom. He writes: “Because our goods are scarce resources, we cannot be beneficent with everyone; we have to prioritize certain people (ourselves, our families, our immediate neighbors) to be the objects of our economic actions. . . . At the same time, we can be benevolent to all by respecting their wellbeing in refraining from causing harm (in economic-speak, in refraining from giving them a negative value on the distribution scale).”
I hope these essays offer better grounding for effective altruists in search of it, and, for everyone else, moral clarity amid perilous confusions.