Public Discourse: The online journal of The Witherspoon Institute
http://www.thepublicdiscourse.com

Is Lying Ever Justified?
http://www.thepublicdiscourse.com/2015/09/15606/
Wed, 02 Sep 2015

In a series of explosive videos, The Center for Medical Progress has exposed the abuses, lawbreaking, and corruption of Planned Parenthood, including the selling of fetal body parts and the alteration of abortion procedures so as not to crush fetal body parts for sale. Similar undercover work by Live Action stimulated an extended debate in the pages of Public Discourse and elsewhere over the question: Is lying ever justified?

Christopher O. Tollefsen, who initiated the conversation in Public Discourse, has now published Lying and Christian Ethics, a terrific book defending the absolutist view that it is never morally acceptable to lie, no matter the circumstances. Tollefsen has caused me to seriously reconsider my own earlier stated position on these issues, and for that I am grateful.

Lying and Personal Integrity

Tollefsen sets out the Christian case against lying—understood as assertion contrary to belief—as found in Augustine and in Aquinas. Augustine provided a forceful denunciation of all false assertion made with the intention to deceive, for any reason, as incompatible with love for God, who is the Truth. The bishop of Hippo also anticipated and responded to numerous objections raised against the absolutist view. Tollefsen’s summary of Augustine’s views systematizes the great African saint’s thought in a powerful and persuasive way. In Tollefsen’s reformulation, Thomas Aquinas provides an even more philosophically rigorous case against lying as a violation of the basic human goods of personal integrity and sociality.

And so Tollefsen’s argument ultimately is a philosophical one applicable to all humans, not just a uniquely Christian argument. The heart of Tollefsen’s philosophical case against false assertion is that it always violates the basic human goods of truth, religion, sociality, and integrity. Often, Tollefsen rightly points out, false assertion is also an act of injustice in which those with a right to the truth are deprived of what was their due. While many intrinsically evil acts are violations of justice, not all intrinsically evil acts are wrong for this reason. So, lies to those who do not deserve the truth would not be contrary to justice, but still would be wrong as undermining the goods of truth, religion, sociality, and integrity.

Tollefsen argues that to love the truth wholly and unconditionally is to set oneself against falsehood wholly and unconditionally. But to lie is to fail to shun falsehood in all respects. To lie is to act, therefore, against the good of truth. If God is Truth, then the good of religion is also involved. “All truth,” writes Tollefsen, “is both of God, as a source, and like God, as an image.” So love for God gives rise to love for the truth which comes from God, and abhorrence of falsehood. Tollefsen also offers a case that lying is an act against the social good inasmuch as it undermines the potential for community. All truthful communication is an act of love inasmuch as love consists in a union between lover and beloved. In truthful communication, a unity comes into existence between the mind of the communicator and the mind of the interlocutor. To lie is to reject the union that could have existed in truthful communication, and therefore at least partially to reject the social good that could have been instantiated in truthful assertion.

Perhaps the most powerful part of Tollefsen’s case against false assertion is his account of the good of personal integrity, an inner harmony in a person’s judgments, choices, and actions. An agent with personal integrity judges that something is the case or ought to be done, chooses in accordance with that judgment, carries out that choice in a completed human action, and experiences emotions in accordance with prior judgment, choice, and action. Personal integrity consists in an inner unity of the person in the practical order of judgment, choice, action, and emotion.

Tollefsen argues that personal integrity so understood—to be authentic, consistent, and harmonious with oneself—is a basic good, valuable in itself as an aspect of human fulfillment. By contrast, cognitive dissonance, weakness of will, double-mindedness, and inner discord characterize the person lacking personal integrity. Although personal integrity is intrinsically and not just instrumentally valuable, this good has profound social consequences. Simply put, social harmony cannot exist without personal integrity. If people lack personal integrity, they will not be able to cooperate in the social order. Agents divided within themselves cannot be harmoniously united with other people.

Tollefsen argues persuasively and powerfully that false assertion violates the good of personal integrity. The one who asserts falsely has a judgment that something is the case (I have lived in Princeton) but chooses to communicate in a way that contradicts that judgment by asserting, “I have never lived in Princeton.” To assert falsely is to commit an act of self-induced practical schizophrenia. The one who asserts falsely intentionally creates a lack of harmony between the inner believing self and the outer communicating self. Thus, the one who asserts falsely intentionally damages the good of personal integrity. If it is intrinsically evil to act against basic human goods, then false assertion should never be the means chosen even to secure important social or political goods or to avoid evils, including death.

Tollefsen’s reflections on the good of personal integrity merit rereading and serious consideration. He is surely right that human action cannot be properly understood simply in terms of the effects that take place outside the agent. To assert falsely undermines something important in the one who communicates falsely.

What Is Lying?

But let’s take a step back. Lying and Christian Ethics begins by addressing the question, “What is lying?” In the Catholic tradition, we find at least three prominent definitions.

St. Thomas Aquinas’s definition covers the broadest range of cases, defining a lie as any assertion contrary to the mind. St. Augustine, in his two great works against lying, Contra mendacium and De mendacio, includes the intention of deception in addition to false assertion. Covering still fewer cases is the definition first proposed by Hugo Grotius and found in the work of many nineteenth- and twentieth-century Catholic theologians. It is this third definition that is found in the first (but not the authoritative revised) edition of the Catechism of the Catholic Church: “To lie is to speak or act against the truth in order to lead into error someone who has the right to know the truth.”

Tollefsen correctly notes that, over the centuries, the majority of Catholic theologians have not endorsed the Grotian definition. But Tollefsen overstates his case a bit when he asserts that “the Catholic tradition is against the Grotian definition” of lying. The history of Catholic theological opinion is rich both in terms of how a lie is defined and in terms of whether false assertion is considered always intrinsically evil. It is precisely this diversity of opinion—even among canonized saints!—that has led to the controversies about the tactics of The Center for Medical Progress and Live Action. The historical sections of Lying and Christian Ethics would have been strengthened considerably by an engagement with perhaps the most detailed treatment of lying in the Catholic tradition, Gregor Müller’s magisterial Die Wahrhaftigkeitspflicht und die Problematik der Lüge.

Such quibbles aside, Tollefsen’s views on these topics are in general historically credible. In any case, Tollefsen’s philosophical case for the proposition “assertion contrary to belief is ethically impermissible” does not depend on how lying has been defined by various theologians, nor upon any particular definition of what constitutes a lie. Whatever the theological history, and however we define the term “lie,” we can always raise the philosophical question, “Is false assertion ethically impermissible?”

Although Tollefsen ultimately rejects their conclusions, he provides an extended consideration of the Christian case for lying as found in the writings of Cassian, Bonhoeffer, and Niebuhr. Tollefsen is not attempting to provide a systematic overview of all the major Christian views of what a lie is and whether lying is intrinsically evil. Still, this section could have been strengthened if Tollefsen had also analyzed the arguments of St. Clement of Alexandria, St. John Chrysostom, St. Hilary, St. John Climacus, and others who defended false assertions in various contexts. It may well be that the representatives treated by Tollefsen provide the most powerful Christian case against the view that false assertion is always wrong, but a further engagement with the Catholic sources could have made this chapter even stronger. Still, Tollefsen is to be commended for considering at such length, and without straw-man caricature, the Christian case against his view.

Surprisingly, a central Christian argument against false assertion is unmentioned in Tollefsen’s otherwise admirably thorough book. If false assertion is permissible in order to save innocent human life, then the actions of innumerable martyrs are deeply problematic. Many of them could have avoided death by false assertions such as, “I am not a Christian” or “I recognize Henry VIII as the head of the Church in England.” Given the duty to preserve and protect innocent lives (including our own), if false assertion is not wrong, these Christian martyrs seemingly acted wrongly in not choosing permissible means to avoid both their own deaths and another’s sin in committing murder.

Personal Integrity and Double Effect

One might wonder whether undermining the good of personal integrity is intrinsically evil. We permit—indeed, we view as heroically generous—the donation of a kidney in order to save another’s life. In kidney donation, the agent intentionally chooses a personal disintegration of well-being, a lack of natural organic integrity, at the physical level. In order to save another’s life or our own, may we not also choose a personal disintegration of our well-being at a psychological level?

But the removal of a kidney is not an intentional undermining of the good of health. The removal of a single kidney for donation is permissible because the good of health, adequate bodily function, is maintained. Removal of the entire liver or healthy heart would not be permissible, since this removal undermines the good of health. In a similar way, false assertion undermines the good of personal integrity, which is intentionally damaged in false assertion rather than suffering damage as a side-effect.

Appealing to double-effect reasoning, some people defend lying as akin to killing in self-defense. However, Tollefsen correctly points out that “killing” as such is not a properly described moral act. Killing is a term that can properly describe a variety of ethically diverse acts: first-degree murder, negligent homicide, accidental manslaughter, as well as the unknowing and unwilling causing of death. By contrast, false assertion is an intentional human action, rather than simply a physical event such as killing. Unlike killing, false assertion, that is, assertion contrary to the mind, is always a human action and can never be done entirely by accident. We cannot come to a moral judgment about the rightness or wrongness of an act unless that act is properly described, not just as a physical event but as a human action.

What about intentionally dying to save another’s life (say, throwing oneself on a grenade, submitting to crucifixion by Roman soldiers, etc.)? Such actions undoubtedly undermine the good of life, yet Christians praise such self-sacrificial love of the other.

These cases are not properly described as intentional killing. The soldier who jumps on a grenade to save his friends is not choosing to kill himself as a means to saving his friends. Rather, he foresees and accepts his own death because he intends to protect his friends from the blast. Similarly, when Christ submits himself to crucifixion by Roman soldiers, he is allowing himself to suffer death as a side-effect of living in perfect love for all human beings and in perfect obedience to the Father.

On Tollefsen’s view, perfect love for all human beings and perfect obedience to the Father also enjoin us to never assert falsely. Lying and Christian Ethics provides a powerful case for the thesis that false assertion violates the goods of personal integrity (love of self), sociality (love of neighbor), religion and truth (both pertaining to obedience to and love of God). Readers inclined to think lying is sometimes justified owe it to themselves to read this book.

Christopher Kaczor is Professor of Philosophy at Loyola Marymount University and the author of The Ethics of Abortion: Women’s Rights, Human Life, and the Question of Justice, 2nd edition (2015) and The Gospel of Happiness (2015).

The Social Injustice Done to Adjunct Faculty: A Call to Arms
http://www.thepublicdiscourse.com/2015/09/14452/
Tue, 01 Sep 2015

It’s August, and many of America’s teens are headed back to college. This means that not a few parents will be left with that empty feeling in the pit of their stomach—not only because their beloved children are leaving the nest but because the bills to pay for their children's new college homes are coming due. According to the College Board, tuition, fees, and room and board at private four-year universities last year averaged $42,419. That was up $1,464 from the previous year. Was your pay raise that high? Parents might be left wondering where all the money goes. Are all these faculty members getting rich?

In an earlier Public Discourse essay, I showed that tuition at American colleges and universities has been rising six times faster than inflation and several times faster than health-care costs, which has forced students to take on ever-increasing levels of debt to pay for their education. I also documented how most of those increases have gone to the support of ever-expanding university bureaucracies and to the salaries of upper-level administrators.

Many colleges and universities have engaged in a frenzy of building, competing to draw in students with lavish research centers, luxurious dormitories, and top-notch recreation facilities. All of these elaborate perks are meant to justify the equally elaborate sums of money students are being asked to fork over to attend such a “prestigious” and “elite” institution. Yet many of these colleges and universities have spent well in excess of their revenues and have had to borrow heavily to pay construction costs. Overall debt levels more than doubled from 2000 to 2011, according to inflation-adjusted data from Moody’s. In the same period, colleges’ cash, pledged gifts, and investments declined more than 40 percent relative to the amount they owe. A study released by Bain & Company and Sterling Partners, a private equity firm, found that long-term debt at US nonprofit colleges and universities grew 12 percent a year from 2002 to 2008, while interest costs increased 9 percent.

Feeling the crunch, administrators have tried to justify their salaries by cutting costs. They have targeted the people who are weakest and least competitive in the job market: the college’s bottom-level workforce. This includes not just the janitorial staff, whose members have at least a minimal chance of being represented by a union, but also adjunct faculty, who currently have absolutely no chance of gaining the benefits of union representation.

The Role of Adjunct Faculty

Today, adjunct instructors make up half or more of all faculty. There is, of course, a legitimate role for such faculty. The category was created to cover those outside the academy who might come in to share their expertise in a special course—say, for example, a marketing executive who comes in to teach a business school course on marketing. These people aren’t looking to achieve a tenured academic position, but they are “faculty” nonetheless.

What such people are generally paid is what we might call an “honorarium” rather than a salary. We can’t really afford to pay the high-level executive what she earns at her regular job, but we feel it “honors” her to be paid something. The justification for not paying them benefits is the presumption that they already have benefits (and usually better benefits) through their full-time jobs, and so offering them employee benefits such as health insurance (as opposed to, say, free parking, use of the library, and access to the gym) would be superfluous.

The kind of “adjunct” faculty we’re discussing now, however, are not in this category. Most of the adjunct faculty that now make up more than half of higher education faculty are not “honored” members of the community who have come into the university to provide students with the benefits of their practical experience. They are hired at poverty-level wages with no health-care benefits and no guarantee of continued employment from semester to semester.

How badly are these adjuncts paid? It varies a bit from institution to institution, but the going rate is somewhere around three to four thousand dollars per course. Even if such an instructor were to be allowed to teach three courses per semester (which would be very rare), he or she would be earning only about $9,000 (pre-tax) per semester, or $18,000 per year—without health-care benefits.

How Much Do Universities Profit from Adjunct Labor?

How does this compare with what the university earns from the faculty member’s labor? Allow me to take an example from a fairly standard private midwestern university. At this institution, a full-time student (taking twelve to eighteen credits per semester) would pay $37,350 in tuition for the 2014-2015 academic year. Assuming the norm of fifteen credits per semester, that works out to $1,245 per credit hour, or $3,735 per three-credit class. For a student taking twelve credits, the figures are $1,556.25 per credit hour, or $4,668.75 per class. At this same institution, the per-credit-hour rate for part-time students is $1,305, or $3,915 for a three-credit class. An adjunct faculty member at this institution is paid $3,000 per semester to teach a three-credit class, without benefits.

So, even for the student who is paying the least amount per credit hour (the full-time student taking eighteen credits, who pays $3,112.50 for the course), the university is still receiving more from a single student than it pays the instructor to teach the entire class.

At nearly all institutions, classes taught by adjunct faculty that do not have at least ten or fifteen students are cancelled. Thus, unless the university is able to take in at least $30,000 for a course it is paying $3,000 for—that is, $27,000 in excess of the amount it pays the instructor—it won’t run the class.

A normal adjunct who teaches three classes per semester will average about seventy-five students in total, which means that the university takes in a minimum of $233,437.50 per semester from that adjunct’s efforts. With ninety students, the university takes in $280,125 per semester, a profit of $271,125 after paying the adjunct (over $500,000 per year). I can remember a semester, when I was an adjunct faculty member, in which I had 150 students and was paid $2,800 per course.
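For readers who want to check these figures, here is a minimal sketch in Python that reproduces the arithmetic above. The tuition, adjunct pay, and enrollment numbers are the essay’s own examples from that one midwestern university; the only added assumption is that each class is a standard three-credit course, as the essay’s per-class figures imply.

```python
# Minimal sketch reproducing the essay's per-course arithmetic.
# Dollar figures are the essay's own examples (2014-2015); the three-credit
# class is an assumption implied by the essay's per-class numbers.

annual_tuition = 37_350.00                  # full-time tuition for the year
credit_loads = {"12 credits/semester": 24,  # credits billed per year
                "15 credits/semester": 30,
                "18 credits/semester": 36}

for load, credits_per_year in credit_loads.items():
    per_credit = annual_tuition / credits_per_year
    per_course = per_credit * 3             # a three-credit class
    print(f"{load}: ${per_credit:,.2f}/credit hour, ${per_course:,.2f}/class")

# Cheapest per-class rate (the 18-credit student): $3,112.50
cheapest_per_class = annual_tuition / 36 * 3

adjunct_pay_per_class = 3_000.00
classes_per_semester = 3

# A class typically needs about ten students to run: roughly $31,000 in
# revenue at the cheapest rate, against $3,000 in adjunct pay.
print(f"10-student class: ${10 * cheapest_per_class:,.2f} revenue "
      f"vs ${adjunct_pay_per_class:,.2f} pay")

# Per-semester totals for an adjunct teaching three classes.
for students in (75, 90):
    revenue = students * cheapest_per_class
    profit = revenue - adjunct_pay_per_class * classes_per_semester
    print(f"{students} students: ${revenue:,.2f} revenue, "
          f"${profit:,.2f} profit per semester")
```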

And how are these adjuncts treated within the community? Adjunct faculty members are usually the invisible men and women of modern academia.

At most institutions, when a “regular” faculty member dies, the entire institution is informed. If the institution has religious affiliations of any sort, there will usually be a large funeral or prayer service on campus. After the death of a longtime adjunct faculty member at the same midwestern university I was discussing above, no campus-wide notice went out. Indeed, if it hadn’t been for the students in the professor’s class going to campus ministry to ask whether something could be done, there probably would have been no public recognition of this man’s death at all. The chair of the department finally sent out a note to department members informing them that the students wanted “to have something in the way of memorial during their class,” and inviting them to pencil it into their calendars if they were so inclined.

Professional Associations Must Demand Justice

Without over-romanticizing the medieval guild—a very human institution that was susceptible to all the usual human failings—when guilds were at their best, they did three things well that protected their members within the context of an uncertain and increasingly mercantilist economy. First, they set standards for the treatment of apprentices and set forth the basic requirements to be met for becoming a master: one was not automatically elevated to “master” status, but neither could one be kept at the level of a lowly apprentice forever. Second, they took care to control the apprentice system so that there were not too many apprentices for the jobs likely to be available within an area. Third, through such rules they protected those at the bottom of the guild hierarchy from abuse and set out for them a clear path to economic security and independence.

Let me suggest that one of the closest things we have to “guilds” in modern American society are the professional academic societies: the American Philosophical Association, the American Political Science Association, and a host of others across the academic disciplines. And in all three areas I’ve just set forth—sensible training and raising up of apprentices, care that there were not too many apprentices for possible positions, and protection of those at the bottom of the guild hierarchy from abuse—the modern academic associations have failed miserably.

Graduate students are allowed to tread water with their heads barely above the surface year after year without appropriate guidance. Graduate faculty often give little or no thought to the placement of their graduates in appropriate jobs, or to whether their own graduate program ought to close in order to help clear up the glut of excess graduate students on the market. Closing such graduate programs would of course necessitate two grievous sufferings that many senior faculty members are unwilling to undergo: the loss of the prestige that comes with teaching in a graduate program and the terrifying possibility of actually being required to teach freshmen in introductory-level courses.

It is crucial, in my view, that senior faculty show themselves willing to make the necessary sacrifices for justice rather than laying the blame solely at the feet of administrators. Senior faculty must shoulder their share of the blame.

Senior faculty must demand basic justice for those who are at the lowest end of the hierarchy and who are the weakest before the ever-increasing power of the corporate university establishment: the “invisible” men and women of the adjunct faculty. These academic guilds have been able to get themselves together to do all sorts of things—print journals, arrange conferences in expensive hotels in big cities, condemn apartheid, affirm global warming, decry racism—but somehow they have never had the time or will to vote for something that might involve “goring their own ox,” so to speak: namely, a nationwide strike by all the guilds against any and all institutions that do not agree to transition every adjunct faculty member who does not have a full-time job elsewhere to “Instructor” or “Assistant Professor” rank, with a regular salary and health-care benefits.

The Time Has Come for a Nationwide Strike

Striking is seen as something blue-collar workers do, not people who sit in the book-lined offices of academia. I mean, elementary and high school teachers in public schools strike, not college professors. Let’s be honest: it’s a class thing. But we in academia have allied ourselves for too long with the wrong class. As much as we like to give lip service to helping those in the lower classes, we ignore those in our midst who do the work we’d prefer not to, from janitorial tasks to grading student papers and teaching non-majors.

It’s time we in academia leveled with each other. If senior faculty members don’t force the issue of justice for adjuncts, no one else will. Most administrators see the increasing use of adjuncts not as a problem, but as the solution to a problem. One might almost sympathize with them if they hadn’t exacerbated the situation by padding their own paychecks, adding ever more bureaucracy, and leveraging their institutions into higher and higher levels of debt with grandiose building projects. The salaries of just one $250,000 Vice President of Something-or-Other and his staff of five would go a long way toward topping off the salaries of five or six underpaid adjuncts.

All this criticism of administrative bloat, while entirely appropriate, should not obscure another important point. Senior faculty must realize that they will have absolutely no credibility on any of their complaints about “corporate America” until and unless they force their institutions to do right by adjunct faculty. These members of our community must be paid a wage appropriate to their experience and level of training. Under no circumstances whatsoever are they to be left without health-care benefits. The time for justice for these “invisible” men and women of academia is now. The only question is how it can be done and how soon.

I don’t know which is worse: the fact that higher education corporate bureaucracies perpetrate this crime against the weakest and least competitive members of their workforce, or that the members of the university professoriate allow the practice to continue unabated in their midst, while enjoying the benefits and freedoms tenure provides.

Randall B. Smith is the Scanlan Professor of Theology at the University of St. Thomas in Houston, Texas.

What’s Driving the Marriage Divide?
http://www.thepublicdiscourse.com/2015/08/14792/
Mon, 31 Aug 2015

There is a growing marriage divide in the United States. Marriage rates among lower-income and working class Americans have declined dramatically, and unwed childbearing has become the norm. However, among college-educated Americans, marriage is doing pretty well: most marry, their unwed childbearing rate has remained nearly as low as it was five decades ago, and they are the least likely to divorce.

This marriage divide is driving a wedge through society: in the upper-income third of the population, children are raised by their married parents, who have college educations. In the rest of the population, children are often born to single mothers with a high school education or less.

Unwed childbearing has long been common among those with the lowest income levels. Only recently has it become the norm among working-class, high-school-educated Americans as well. Not only does this trend leave a large proportion of America’s children at much higher risk of poverty, it also puts children at greater risk for outcomes that make them less likely to thrive. Children raised without their married mother and father are more likely to drop out of high school, go to jail, abuse alcohol and drugs, and become single parents themselves.

In Labor’s Love Lost, Johns Hopkins University professor Andrew J. Cherlin argues that this marriage divide has occurred because of rising income inequality, driven by decreasing wages among the working class since the 1970s. Cherlin overestimates the economy’s role in this marriage divide, however, underestimating the role of cultural changes. To combat the social ills driven by the decline in marriage, it is not enough to remove economic barriers to marriage. We must also rebuild our marriage culture.

Is It Really the Economy?

There is an ongoing argument about what has driven the decline in marriage and the rise in unwed births in working-class America. The left often cites economic decline, as Cherlin does, whereas the right emphasizes cultural changes that came with the sexual revolution of the 1960s.

Cherlin’s narrative goes like this. From the 1950s through the mid-1970s, manufacturing jobs were plentiful. These jobs required minimal education while providing wages substantial enough for a single earner to support a family. When the manufacturing industry began to drop off in the 1970s, however, so too did good wages for working-class men. This decline of manufacturing jobs—along with changing cultural trends that made it more acceptable for couples to live together and to bear children outside of marriage—led to a rapid decline in marriage and a subsequent jump in unwed births. Although he addresses cultural factors such as the sexual revolution and the birth control pill, Cherlin ultimately concludes that if job prospects were better for this group, unwed births would be much less common.

Cherlin’s narrative is questionable for several reasons. For one thing, Americans are better off today than ever before. Wages for all Americans have increased since the 1970s. While the lowest- and highest-income groups in the US have seen the greatest wage growth, the average wage for working class Americans is still higher today than it was in the 1970s.

And it’s not just wages. On top of rising compensation, taxes have fallen substantially for most workers since the 1970s. Middle-class workers actually have 30 to 50 percent higher real, after-tax incomes than in the 1970s. Also, government benefits are higher today than they were four decades ago. In total, most working-class families have more resources available to them today than they did forty years ago.

Moreover, researchers have directly examined the thesis that reduced manufacturing employment reduces marriage rates. Sociologists at New York University recently studied how increased imports from China in the 2000s affected marriage in communities that produced competing products. The new competition had negative economic effects—but it did not affect marriage rates. This research is preliminary, but it casts serious doubt on the primacy of economic factors in the decline in marriage rates. If centuries of subsistence-level poverty did not destroy the two-parent family, it is hard to see why a late twentieth-century slowdown in the rate of compensation growth would.

Declining Marriage Rates Reduce Male Incomes

On the other hand, Cherlin is correct that working-class men are indeed less likely to be employed today than in the past. Part of the reason appears to be directly connected to the decline in marriage rates—but as an effect, not a cause. In other words, because marriage rates are down, men are less likely to engage in the labor force.

In a 2014 report published by the American Enterprise Institute, researchers Brad Wilcox of the University of Virginia and Robert Lerman of American University report that over half (51 percent) of the decrease in male employment between 1980 and 2008 (and 37 percent of the decline between 1980 and 2013) is connected to the decline in marriage. The authors note:

When young men and women replace formal commitment with informal relationships or none at all, work becomes less urgent, especially for men, who have historically taken all kinds of jobs to support their families. With no wife or children to support, men become less focused on the job market.

Wilcox and Lerman’s research shows that the greatest decline in male employment since 1979 has been among unmarried men. This trend holds true across all levels of education. The authors also point out that median family income would be 44 percent higher today if the United States had the same rate of married-parent families as in 1979.

Studies show that marriage is connected with a wage “premium” for men, and it’s not just because men with higher wages or greater earning potential are more likely to wed. Wilcox and Lerman find that married men work more hours and hence earn more on average. Married men ages 28-30 with a high school diploma or less earn an average of $17,164 more annually compared to their single counterparts. Married men between 44 and 46 years of age earn an average of $28,253 more than their single peers.

Marriage is connected with higher earnings for another reason: men and women who were raised by their married parents earn more, on average. Men who are 28-30 years of age with a high school education or less earn an average of about $4,504 more annually if they were raised in a married-parent family.

As unwed childbearing has increased, more children are raised without fathers. While both boys and girls are at higher risk for negative outcomes when raised outside an intact family, research indicates that father absence puts boys at greater risk than girls for lower educational achievement—and thus, lower earning potential.

A Changing Culture and the Breakdown of Marriage

To his credit, Cherlin does not ignore the cultural factors driving down marriage rates. In particular, he points out that the introduction of the birth control pill “contributed to a larger cultural phenomenon that began in the 1960s—the separation of sex, marriage, and childbearing.” Between 1965 and 1972, the percentage of women under age 30 who agreed “that premarital sex is ‘always wrong’” dropped markedly, declining from 50 percent to 17 percent over just seven years.

The spread of birth control and the legalization of abortion worked to disconnect sex from childbearing. It ended up disconnecting childbearing from marriage, weakening men’s responsibility as fathers. As Brookings Institution scholars George Akerlof and Janet L. Yellen put it, “By making the birth of the child the physical choice of the mother, the sexual revolution has made marriage and child support a social choice of the father.”

Not surprisingly, the rate of unwed childbearing, which had been low throughout American history, began its steady and rapid climb in the 1960s. Today, that unwed birthrate is over 40 percent among the general population. For those with lower levels of educational attainment, the rate is even higher: roughly 65 percent among those with less than a high school diploma and over 50 percent among those with only a high school diploma.

Furthermore, in the mid-1960s, the federal government began its major foray into social welfare policy with President Lyndon B. Johnson’s “War on Poverty.” This government means-tested welfare system, which has grown dramatically over five decades, diminished the need for a father as a provider, making unwed childbearing more feasible.

Helping More Families Succeed

There is no question that we must work to open the door of opportunity for more Americans. Finding ways to improve the job market is an important goal. Furthermore, college is expensive, and students often graduate with heavy loans. Online education and other innovative educational formats are springing up, giving more students access to affordable education. Finding ways to expand these educational innovations would probably give more people the opportunity to improve their career prospects.

However, policies and reforms directly targeting the job market and the educational system can only go so far. Strong families are crucial to opportunity and prosperity. A culture of marriage must be restored. There has been a cultural shift away from childbearing within marriage, and without direct efforts to address this it is unlikely that marriages and families will become more stable.

Cherlin notes that efforts to provide individuals with marriage and relationship education have been only mildly successful at best. Yet, this doesn’t mean we should give up. More effort is needed, not less. Strengthening marriage is a work that will require participation at every level of society: neighborhoods, churches, the educational system, state and local governments, and so forth. One step forward would be a public advertising campaign to spread the message about the benefits of marriage and the importance of waiting until after marriage to have children.

The US welfare system should be reformed to promote work and to reduce marriage penalties. Marriage connects parents, particularly fathers, and their resources to their children. Society loses much when marriage declines, including economic benefits. In order for families to have the greatest opportunity to thrive, the norm needs once again to be that a man and a woman are committed to each other and the children they create through marriage.

Rachel Sheffield is a policy analyst in the DeVos Center for Religion and Civil Society at The Heritage Foundation and co-author, with Robert Rector, of “Understanding Poverty in the United States.”

The Unreasonableness of Secular Public Reason
http://www.thepublicdiscourse.com/2015/08/14619/
Fri, 28 Aug 2015

Although it may come as a surprise to some, the Constitution does not enact Mr. John Rawls’s Political Liberalism. That is to say, it is a category error to attribute to the Constitution (via the establishment clause of the First Amendment) the Rawlsian concept that “public reason” and political discourse should exclude “comprehensive doctrines” such as religious belief systems.

The accents of this argument could be heard in the Iowa supreme court’s marriage ruling in 2009, in which the court held that “religious opposition to same-sex marriage” was the real reason the state protected conjugal marriage in its law. Therefore, the judgment went, the law lacked a rational basis and was unconstitutional. Likewise, Judge Vaughn Walker of the federal district court that struck down California’s Proposition 8 claimed to “find” as a “fact” that “moral and religious views form the only basis for a belief that same-sex couples are different from opposite-sex couples” with respect to marriage. For Walker, “moral” was fungible with “religious,” and therefore Prop 8—you guessed it—lacked a rational basis.

The granddaddy of this strange argument is the view of Justice John Paul Stevens in the 1989 abortion case of Webster v. Reproductive Health Services. Stevens preposterously argued that a Missouri abortion law lacked “any secular purpose for the legislative declarations that life begins at conception and that conception occurs at fertilization” (which happen to be two uncontroversial scientific facts); that he could perceive only theological propositions at work in such legislation; and that therefore it violated the Establishment Clause of the First Amendment.

This transparent attempt to cripple legislative efforts to regulate or prohibit abortions was predicated not only on a willful blindness about the character of the arguments employed by pro-life legislators, but on a tortured reading of the Establishment Clause. For even if it were the case that prohibition of abortion rested, in the final analysis for every one of its supporters, on a theological proposition about the sanctity of human life, such a prohibition would not violate any reasonable reading of the First Amendment.

For voters and legislators to act on religiously informed moral convictions in making the law may entail a blending of religion and politics that is disquieting to the secular liberal mind, but it closes no gap in the “separation of church and state,” even assuming (as we should not) that that phrase expresses the best understanding of the Establishment Clause. No coercion to profess a religious belief or even to conform to one appears in such a law, and no advantage or special position is given to any sectarian institution in the law.

God Forbid Someone Mention God

Quite apart from the Constitution, the Rawlsian public reason norm is a philosophical mistake, a transparently result-oriented political move that, even with the best of intentions regarding the prevention of political conflict, is doomed to backfire.

The idea of “public reason” expresses a norm under which “comprehensive doctrines,” including “reasonable” ones, are to be generally excluded from public discourse on constitutional questions or matters of “basic justice.” By Rawls’s definition, comprehensive doctrines are not necessarily religious, but religious belief is the paradigmatic example. No such belief, Rawls was certain, would ever possess the free and willing allegiance of everyone in a democratic society. And so, for the sake of peace and justice, the truth claims of comprehensive doctrines must not enter the arena of political contest and debate.

Whether cast in hard constitutional-legal form or, more softly, as an ethical norm of civic life, Rawlsian public reason seems to entail a simple rule for public discourse: God forbid one should mention God—unless one immediately makes another argument wholly disconnected from religious premises.

We should beware of a philosophy in which so much work is done by the adjectives. Rawls’s repeated insistence on the public character of the reason employed in political discussion should make us stop and ask, what is the opposite of the public? It is the private. And since the counterpart to genuinely public reason, in the Rawlsian calculus, is the comprehensive doctrine, then it seems that the comprehensive and the private are equivalent terms. But it is not obviously the case that people’s comprehensive views are private things in the sense that they do or should keep them to themselves—even the “reasonable” comprehensive doctrine, which is quite possibly correct. In the case of religion, the paradigm of a comprehensive view, people frequently hold themselves out in public as believers, and even act together in churches, mosques, synagogues, ashrams, gurdwaras, temples, schools, and various other institutions of civil society.

The one undeniable fact on which Rawls pins his whole notion of public reason is that there is a diversity of such (chiefly religious) comprehensive doctrines. It is not even the case that this diversity necessarily causes conflict, although it can and often has. But Rawls’s evident fear of such conflict leads him to construct a liberalism that deals with religious pluralism by demanding that the comprehensive be treated as the private. In short, religion must be privatized, as a requirement of justice itself.

Critics of Rawls and His Inconsistent Exceptions

The critics of Rawlsian public reason are legion, from John Finnis and Robert P. George to David Lewis Schaefer, from Christopher Wolfe and Steven D. Smith to Jeffrey Stout. Such critics have established that Rawlsian public reason is a “ramshackle” philosophy whose true purpose is to seize the high ground for secularist prejudices.

Rawls’s bad faith is demonstrated by the exceptions he makes. Although John Finnis, for instance, has offered natural law arguments against homosexual conduct that are perfectly accessible to reason and grounded on no theological presuppositions, these arguments provide Rawls with his one and only example of a secular “comprehensive doctrine” that must be classed with religion as beyond the pale. Because arguments of this kind are expressions of “moral doctrine,” they “fall outside of the domain of the political”—the domain, that is, of public reason. This distinction between the domain of the moral and the domain of the political seems utterly arbitrary, especially since the entire project of Rawlsian public reason is, on its own terms, an attempt to construct a moral framework for political life.

The other notable exception made by Rawls is for the Christian motivations of the abolitionist and civil rights movements. Religious discourse such as Rev. Martin Luther King Jr.’s is permissible, Rawls says, “when a society is not well ordered and there is a profound division about constitutional essentials,” such that “nonpublic reasons” are thought to be “required to give sufficient strength” politically to “the ideal of public reason.” This exception appears to have been introduced to rescue Rawls from the embarrassment of condemning Reverend King. For what did King and his adversaries represent but a deep conflict over deep principles, resolvable only by choosing between two competing comprehensive doctrines?

Rawls disapproves of arguments against homosexual conduct, and approves of arguments in favor of equal civil rights regardless of race. He cannot, it seems, resist the urge to permit one of those arguments despite its being religious, and to exclude the other despite its being non-religious. This is not philosophy, but political base-stealing.

Rawlsian public reason is more likely to cause conflict than to reduce it. It’s the Chris Christie of public discourse, telling religious citizens to “sit down and shut up.” Rawls admits that “liberty of conscience” is one of the “constitutional essentials” in any liberal political order. This is good to hear. But he also says “separation of church and state . . . protects religion from the state and the state from religion; it protects citizens from their churches and citizens from one another.” This is “separation” with a decidedly secularist bias. It fails to give liberty of conscience the freedom to be active in the world as a witness to faith in word as well as deed.

Religious Discourse in the Public Square

Rawls’s Political Liberalism, for all its popularity and influence, was decisively rebutted by a better book nine years before its publication—The Naked Public Square, by Richard John Neuhaus. Since Neuhaus too wrote of an “obligation” religious believers have to “translate” their most religiously inflected arguments into reasons that people of other dispensations are willing to accept, some readers have seen no great difference between his view and Rawls’s. This is a serious misunderstanding. For Neuhaus, the idea of “public reason” is exactly what Rawls denied it was: a way of creating a diverse society in which various religions, and non-religious views, interact in democratic decision-making.

Neuhaus did not argue that “comprehensive doctrines” are, by virtue of being comprehensive, therefore suspect—i.e., incapable of being made accessible to others and thus necessarily private. Neuhaus’s argument was exactly the reverse. Democracy needs its “comprehensive doctrines” in the forefront of citizens’ consciousness, or else the state becomes its own totalizing comprehensive doctrine. As he put it, “a perverse notion of the disestablishment of religion leads to the establishment of the state as church.”

There is no compelling reason of principle for religious citizens to refrain from employing religious discourse in the public square. They must, of course, reason together with their fellow citizens in order to persuade others of their policy views. But if their major premises, so to speak, are theological, there is no harm done, so long as their policy conclusions can be reasonably embraced by others who have different commitments.

The attribution of a “strictly religious” motivation to a policy view offers an incomplete account of how people actually reason in political life. Beliefs that may be called “strictly” religious or theological typically supply only a major premise for a policy conclusion. The minor premise will usually be supplied by other considerations—of cost, of prudence or practicality, of justice to others, of forbearance toward those same others. Even “thou shalt not kill,” for instance, is not a principle that by itself can lead straight to anything in public policy—not even a coherent homicide law—without intervening minor premises that will tell us when, how, and with regard to whom the principle will be applied.

Some liberals are fond of arguing that conservative positions on abortion and marriage, for instance, are only held for “strictly religious reasons.” To my knowledge, they have failed to establish even the descriptive accuracy of this claim. But even if it were true without exception that all persons taking the conservative positions on these issues began with religious major premises about “what God commands” about human relations, it would amount to no disrespect of others.

“God commands respect for human life” or “God commands the virtue of chastity in sexual relations” is hardly the stuff of disrespect. It’s an invitation, the beginning of an argument. You can reject the invitation, or begin the argument another way, or demand a “translation” into terms you find more accessible. Maybe you’ll get one. But the policy conclusion—to protect human life from conception to natural death, or to define marriage as a conjugal union of a man and a woman with a view to raising any resulting children together—cannot credibly be called an imposition of a “strictly religious” view by coercive law. For it is nothing like requiring adherence to any particular view of the human person’s relationship to whatever divine reality there may be. It is not even a demand that we conform our behavior in accordance with the propositions stated by such a view. It is nothing more than the application of an ethical stricture to the legal environment, and it can be debated as an ethical stricture and as a policy worth pursuing—or not—on strictly practical grounds.

As Justice Robert Jackson said over seventy years ago, “freedom to differ is not limited to things that do not matter much.” To close down debate with a “that’s strictly religious” objection is the opposite of liberalism, and there is no justification for it.

Matthew J. Franck is the director of the William E. and Carol G. Simon Center on Religion and the Constitution at the Witherspoon Institute. These remarks were prepared for a symposium on “Religion and Public Discourse” at Case Western Reserve University Law School on March 6, 2015.

What Science Doesn’t Know
http://www.thepublicdiscourse.com/2015/08/14645/
Thu, 27 Aug 2015

The feature article of the March issue of National Geographic attempts to explain the results of a January 2015 Pew Research Center report that demonstrates how many Americans seem to be out of step with the triumphal march of modern science. Not only are decreasing percentages of the American public expressing positive stances toward science in general, but many are rejecting outright the scientific consensus on several key issues. When it comes to topics such as evolution, climate change, vaccination, population growth, and GMOs, large numbers of ordinary people in the US seem to think that they know better than the scientific community. How could so many people—a substantial number of them highly educated, no less—be so backward?

According to the article's author, it’s because “the scientific method leads us to truths that are less than self-evident, often mind-blowing, and sometimes hard to swallow.” “The scientific method is a hard discipline,” requiring us to repress the “naïve beliefs” to which we tend to cling like a child does to a tattered and useless blanket. “Science tells us the truth rather than what we’d like the truth to be,” jolting us awake from our intuition-induced and religion-reinforced stupor. The conclusion to which the author is led is that “scientific thinking has to be taught.” Ordinary Americans must be dragged out of the cave of naïve pre-scientific thinking and brought into the light of day where they can see and understand what scientists have been trying to tell them.

Scientist-Kings?

The cave analogy is particularly apt here, for the argument represented in this article (and repeated in many other places) is not merely that science is valuable because it furthers our understanding of the world in which we live. The scientific method that characterizes the scientific profession is, in fact, the only way to really understand the world in which we live, and as such should be “our only star and compass” (to paraphrase Locke) when formulating public policy. Science education isn’t important the way taking a child to a local discovery museum is important; it’s important the way Plato’s philosopher-king is important.

The crux of the argument is far from new, and was put best by Plato millennia ago:

Until [scientists] rule as kings in their cities, or those who are nowadays called kings and leading men become genuine and adequate [scientists] so that political power and [science] become thoroughly blended together . . . cities will have no rest from evils . . . nor, I think, will the human race.

Of course, Plato speaks of “philosophers” rather than scientists, but in the self-presentation of modern science these amount to the same thing. “Science” simply means “knowledge,” and “wisdom”—the Greek “sophia,” from which we get “philosopher”—means knowledge of the highest things or of the whole. And so we are brought to the real exposed nerve of the modern scientific method and the myriad modern scientists it has spawned: namely, that this scientific method presents itself as the way of knowing absolutely everything there is to know. Of course, the ultimate goal of knowing everything may never be reached—the same way “philosopher” means “lover of wisdom,” not “possessor of wisdom”—but the scientific method is offered to us as the only avenue of approach to this goal.

As the modern guardians of all knowledge, scientists wield a tremendous amount of power. And like Plato’s philosopher-kings, some scientists have engaged in the dissemination of “noble lies” for the purpose of aligning public policy with their judgments of what is desirable for all of us. As the author of the National Geographic article admits, even scientists are susceptible to “confirmation bias,” or the tendency to tailor their interpretations of the evidence to the theories and predilections they unavoidably bring to their work. Scientists are human beings too, subject to precisely the same “will to power” Nietzsche ascribed to philosophers.

This danger has become evident in recent years in the cases of embryo science and climate change. In the case of embryo science, as is shown in a 2006 exchange Patrick Lee and Robert George had with Lee Silver, it is clear that at least some policy-minded scientists distorted key scientifically-established facts in order to further the political agenda of embryonic stem cell research. And in the case of climate change, the Climate Research Unit of East Anglia University has been twice embroiled in scandal over the release of numerous emails that clearly belie the usual story of an objective scientific consensus on the issue.

Power corrupts and knowledge is power, and so it should come as no surprise that some scientists succumb to the temptation to use their position as the gatekeepers of knowledge to further their political influence. Political influence, moreover, often translates into economic advantage, another universally potent motivator for human beings. Dishonest scientists exist and should be exposed; but what of the many honest, competent scientists? Should they be treated as scientist-kings?

Can Science Know Everything?

Science is often juxtaposed with religious belief in popular discourse as the two primary—and opposed—pathways to understanding the world. Religious believers argue that the scientific method runs up against a limit in its quest to know everything, and that this limit marks the starting point of faith. Scientists tend to bridle at this proposed limitation. Perhaps this is because the objects of religious belief—God, Heaven, Hell, the Devil, Angels, etc.—would, if they did exist, obviously be more important and compelling than the objects of scientific knowledge.

I would argue, though, that this sort of argument regarding the limitations of the modern scientific method already concedes far too much to scientific pretensions. One need not even go beyond the realm of mundane, ordinary, everyday human life to see clearly that the reach of modern scientific knowledge stops well short of what is most important to human beings. Modern science might have the teeth, and certainly the roar, of a T-Rex, but it also has the T-Rex’s famously stubby arms.

Take the recent movie Gravity. The film provided the most stunning visual depiction of outer space ever presented to a mass audience, and outer space is perhaps the most widely intriguing object of modern science. The stars of the movie, though, were not the actual stars but rather Sandra Bullock and George Clooney. Nor were Bullock and Clooney of interest because of any of their scientifically accessible features, such as the physical composition of their bodies, the chemical reactions going on inside them, or their medical health.

Bullock and Clooney were of interest because of their relationship with each other, their relationships with those they had left behind on Earth, and their relationships with themselves. They cared about each other. They experienced happiness, despair, hope, and love. When Clooney’s character was lost, much more had been lost than his physical-chemical existence; he even reappeared to save Bullock’s life after this scientifically-analyzable aspect of his existence was presumed to be long gone.

There is a reason why these elements of the movie were the most compelling ones to most viewers, and it’s not that most viewers are “naïve” and deficient in scientific education. Things like happiness and love are simply much more important to human life than astronomy and astrophysics, as “mind-blowing” as these undeniably are. People care much more about being happy, finding love, fighting for justice, and securing peace than they do about the chemical composition of the atmosphere—and they should. The scientific method can certainly tell us quite a bit about the physical, chemical, or otherwise material epiphenomena surrounding the things that are most important to our lives as human beings, but it can’t even begin to analyze or understand these things in themselves.

Upon seeing a loved one, for example, there are all sorts of scientifically measurable and analyzable chemical and physical changes in one’s body. These changes captured by the scientific method and understood by the scientist, though, aren’t themselves the love that is experienced. If one remains steadfast in claiming that such scientifically accessible properties are in fact constitutive of love, then one is merely claiming that what we mean to signify by the term “love” doesn’t exist—a claim that is ridiculous on its face. And such is the case even more clearly for more abstract concepts such as justice or peace. Because these things aren’t made of stuff that the scientific method can get its hands on, does that mean they don’t exist? Or that we can’t know anything about them?

Science and Public Policy

This brings us back to the puzzlement of the National Geographic article’s author, who cannot comprehend the failure of Americans to allow scientific facts and the various consensuses of scientists to dictate public policy on issues such as climate change, evolution education in schools, GMOs, population growth, or vaccination.

Perhaps it isn’t the ignorance or naiveté of ordinary, non-scientific Americans that prevents them from accepting what scientists tell them; perhaps it’s their knowledge of and experience with realities which they rightfully judge to be more important than the objects accessible to modern science. Perhaps it isn’t that “scientific thinking has to be taught” to non-scientists; perhaps it is scientists who should learn from the rest of us.

Adam Seagrave is an assistant professor of political science at Northern Illinois University and author of The Foundations of Natural Morality: On the Compatibility of Natural Rights and the Natural Law.

Slaying the Hydra: Can Virtue Heal the American Right? http://www.thepublicdiscourse.com/2015/08/14569/ http://www.thepublicdiscourse.com/2015/08/14569/#comments Wed, 26 Aug 2015 11:00:27 +0000 http://www.thepublicdiscourse.com/?p=14569

We’ve come to that agonizing point in our political process when each political party must choose its champion. Republicans are trying to decide in whose hands to place their party’s fate. Perhaps the uninspired but reassuringly American Scott Walker? The inexperienced but well-spoken Marco Rubio? Rand Paul, a man of intelligence and conviction who nonetheless selected drone strikes as the issue most worthy of a filibuster? Or should we throw caution to the wind and pick a buffoon with a giant wallet for his soapbox?

The stakes are high. America sits in the shadow of a militant secular culture that seems determined to subdue everything in its path. Liberal Democrats have lashed themselves firmly to the mast of that dominant culture, and by doing so have won a political edge. Our mainstream cultural institutions eagerly promote their values and often their candidates as well. Meanwhile, on the conservative side, we obsess about messaging, demographics, and electoral ground games, and while those do merit attention, the hard decisions will ultimately revolve around one central problem. Conservatism has become countercultural, and it’s hard to win elections from a countercultural platform.

At the heart of this debate lies a brutally simple dilemma: we can either move ourselves in the direction of the mainstream culture, or we can continue trying to persuade the culture to move back toward us.

As usual, the right choice is also the harder one. Our liberty will never really be safe among a citizenry that disregards virtue. If conservatism throws away its other commitments in order to compete for progressive hearts, it may as well just not exist. However far our compatriots stray from natural law, we must continue to call them back to prudent ways of living, reminding them of the manifold benefits of discipline, self-sacrifice, and virtue. Unfortunately, many of our allies have grown apathetic or even hostile to this fundamental work.

Small Statism and the Lesson of the Tea Party

Within modern conservatism, there is presently a great deal of support for what we might call “small-state minimalism.” Minimalists get enthused about plans to “small up and simple down” not just our government but also our conservative message and philosophy. Instead of conserving traditional ideals and values, they argue, we should focus our political efforts on preaching small-state principles. Lower taxes, reduce regulations, and try to dismantle the administrative state as much as possible. Throw a bone to the religious by promising to defend freedom of religion, but more generally, try to diminish the government’s intrusion into the lives of ordinary people. Silence our preaching about abortion, marriage, and especially sex.

I understand the appeal of this approach. It revolves around a simple, understandable objective, which resonates with people in an era of intrusive, overbearing government. In a live-and-let-live way, this message still seems attractively principled, centering as it does on a time-honored conservative commitment to limited government. And it instills a sense of urgency in grassroots conservatives, given the alarming growth and rank corruption of the state, particularly under the present administration.

Small-state minimalism also promises a neat solution to the still-raging culture wars. By unlinking cultural conflict from the aggressive arm of the state, minimalists think we can dissociate ourselves from politically damaging conflicts that they mostly regard as lost. Religious conservatives are free to continue their efforts to convert the heathen at a grassroots level, but in the meanwhile, shrinking the state may open a space for conservatives to live their lives more peacefully (while also winning some elections).

There is a serious problem with this plan: It won’t work.

Winning a Battle, Losing the War

Small-state minimalism may win a few battles, but it will lose the war. That’s because it misunderstands the relationship between our militant secular culture and its political counterpart, the modern administrative state. We cannot unlink them; they are the same foe. Conservative minimalists imagine that they have devised a principled and practical way of escaping the quagmire in which we find ourselves. In reality, they are laying down their arms even as the enemy’s most fearsome titans take the field.

The Tea Party teaches some useful lessons here, in both its successes and its failures. It generated a wave of conservative support following the passage of Obamacare. Eventually, that energy ebbed, but it would be quite wrong to suggest that the Tea Party ultimately “failed.” The priorities of the Republican Party were dramatically restructured in response to its critique. Serious Republican candidates are now under considerably more pressure at least to pay lip service to small-government ideals. That’s a major accomplishment for a grassroots movement. Americans have become seriously concerned about the growth of the state, and the Tea Party channeled that energy and ultimately brought it more into the mainstream of public life.

In other respects, however, the Tea Party was lamentably unsuccessful. It never matured into a respected, mainstream political movement. Within a few years, the American public mostly came to regard it with suspicion and distaste. We might blame the so-called “establishment” Republicans for this, given their lack of eagerness to mentor their Tea Party colleagues into seasoned legislators. Nevertheless, the movement’s deeper problems are philosophical, not practical.

The Tea Party was a response to a particularly egregious instance of anti-democratic governmental overreach: Obamacare. That reaction was entirely good and proper, but it wasn’t grounded in a substantial conservative vision. Lacking that foundation, the Tea Party’s energy was bound to fade. And its legacy has been an even greater reticence on the part of movement conservatives to create a platform of substance.

Slaying the Hydra

With our secular enemies engineering a coordinated attack on every front (political, cultural, and spiritual), our responses are half-hearted and piecemeal. We can understand better the failures of political conservatives if we draw an analogy between liberal progressivism and that ancient mythical monster, the hydra.

In Greek mythology, the hydra is a large reptilian beast with multiple serpentine heads. If one head is severed, two more grow in its place. A warrior intent on slaying the hydra would understandably tend to fixate on whichever head was actively threatening to devour him, but ultimately this is not a recipe for victory. In order to destroy the beast, it is necessary to deal with the monster in its totality.

The modern administrative state and our militant secular culture are like two heads of a single hydra. The modern state is a kind of secular church, wherein secular progressives pursue the only kind of fulfillment they think possible for humankind. The size and intrusiveness of the modern state mirror the strength and aggression of our secular culture. But the state also helps to create optimal conditions for the further entrenchment of secular ideals, by undermining natural community and fostering vice. It saps the strength and natural resources of its citizens, until they are finally unable to resist its incursions on their liberty.

In short, the state and its supportive culture are part of a single whole. Neither can be killed while the other lives, and by fixating too exclusively on one, we risk leaving the other to grow in strength, ultimately paving the way for a resurgence of both.

Bush-era Republicans already made this mistake. While their attention was largely fixated on cultural and moral problems both here and abroad, the administrative state was permitted to grow and metastasize. For a time, it seemed that they were making progress. Then came the 2008 election, when the secular faith came surging back under the leadership of a new political Messiah.

While we were distracted with Iraq, and with a nest of thorny cultural issues, the statist component of the liberal monster was gorging itself. This left Barack Obama a roomy and attractively refurbished secular church, even as Republicans collapsed into a morass of doubt and self-recrimination. Of course, Obama showed no reticence in undertaking further renovations once he was in office.

Chastened small-state conservatives think they have learned from the failures of the Bush Republicans. They have not. The lesson they have drawn from twenty-first-century politics is that the state is the true enemy. But the real moral is that a hydra must be battled in toto. If we allow ourselves to fixate on particular heads, it will assuredly kill us in the end.

Small-state minimalists advise us to delete our moral and cultural critiques of secularism from the Republican platform, pursuing instead a non-intrusive and neutral state. But this solution appears principled only to those who have already accepted the secularist’s version of what a “neutral” state should be. Minimalists claim to be interested only in liberty, but they fail to understand that the naked public square is itself a completely secular ideal. In the interests of preserving freedom, they wish to crown secularism as our de facto national faith. Their promises of support for religious autonomy are, in many individual cases, sincere. But secularism will never consent to leave its hated Judeo-Christian parent unmolested.

Ironically, small-state minimalism is a losing strategy even for Republicans with libertarian leanings. Supposing we could succeed in beheading the administrative state, the reality is that a thriving secular culture will never be satisfied with modest, non-intrusive government. Secularism is spiritually impoverished, and its eschatological horizons are all political. Its appetite for re-ordering human society is insatiable. Any setback in its statist ambitions will be but temporary, unless we can revitalize our culture and incorporate a robust appreciation of natural goodness into our political efforts as well as our private ones.

The path of the small-state minimalist leads, at best, to a pyrrhic victory. As conservatives fixate on battling the state, secular culture will be left to gorge itself. Apparent short-term political gains will be followed by catastrophic losses as the liberal monster surges forth with renewed strength and vigor.

Can Virtue Heal the Right?

Human societies tend to be shaped by a multitude of unexpected developments: wars, technological advances, economic or demographic shifts, and so forth. The same myopic shallowness that enables progressivism to encode its philosophy into easily consumable, attractive memes can also prove a liability when challenges arise for which progressives have no ready answers.

Taking a longer view of things, therefore, we should recognize that despair is not yet warranted. We do, however, need to put our own house in order. The American right must renew its commitment to virtue if it is to survive.

This does not mean that we should eliminate all distinction between positive law and natural law, criminalizing all vices and mandating virtue. On the contrary, virtue-interested conservatives have a high respect for personal integrity, freedom of conscience, and the natural community (especially the family). Unlike progressives, we have no expectation that good government can draw the human race toward a shining horizon of politically achieved human perfection. We emphatically do not wish for it to try.

We should, however, try to ground our political institutions in a substantial and realistic view of human good. Our aim should be to construct a society that bolsters the natural benefits of virtue instead of tearing them down. We should cherish our liberty, but always with a sober understanding of what liberty is for, and of the many ways in which vice and corruption can undermine the conditions that make true freedom possible.

This is the true answer to America’s political and moral dilemma. In the face of a fearsome progressive enemy, we must counter with a vision of our own that is equally comprehensive, but demonstrably more pragmatic, more principled, and more grounded in a right understanding of the human condition. Only when we can unite around such a vision will the political right be able to speak with the strength and authority that it needs to reclaim our republic.

Rachel Lu teaches philosophy at the University of St. Thomas.

In the Cave: How Automation Changes the Way We Interact with Our World http://www.thepublicdiscourse.com/2015/08/14514/ http://www.thepublicdiscourse.com/2015/08/14514/#comments Tue, 25 Aug 2015 11:00:34 +0000 http://www.thepublicdiscourse.com/?p=14514

Nicholas Carr has titled his latest book, which investigates the dangers of widespread automation, The Glass Cage. The title alludes to a shift in the display of information in an airplane’s cockpit from an older analog model to an electronic LCD screen. Among other things, the turn to electronic display rendered the plane’s flight engineer obsolete. But with increased automation in the cockpit, even the pilot, who spends an average of only three minutes per flight actually working the controls, now seems redundant. The glass cockpit has become a glass cage.

Readers of Carr’s interesting and disturbing book will perhaps come away with a different image, also suggested by his title: the image of Plato’s Cave. As in the cave, where image substitutes for reality and connection to the real world is lost, so in Carr’s rendering does ubiquitous automating technology threaten to leave us in a mere “shadow of the world.” The Platonic overtones are unmistakable, though Carr does not explicitly acknowledge them.

Carr returns often to the condition of modern pilots. The degree to which their work has been automated is considerable, and both the benefits and the costs of that automation are apparent. Air travel is vastly safer now than even sixty years ago—far safer, in fact, than driving. Automation also cuts costs: in those same sixty years, the professional staff of a typical airliner has shrunk from five, with a navigator, a radio operator, and a flight engineer, to two, with only the pilot and co-pilot remaining. Not a benefit for the professionals, but surely a less costly way to fly.

On the other hand, some dangers must be chalked up to automation, as disuse of manual skills causes those skills to atrophy. In an emergency, pilots can show diminished quality of response, occasionally resulting in tragic and lethal pilot error.

The occasional and preventable accidents that result might seem like a small price to pay for a safety record that, on balance, is more than impressive. But Carr is after something bigger than a look at the safety risks that automation may pose when automation complacency and bias lead professionals (such as doctors) to overlook or ignore essential data. For Carr, more central than such risks is the way that technology negatively affects our agency, our cognition, and our connection to the world.

Automation and Agency

Let us start with the question of agency. What impact does automation—the replacement of human activity by computer-guided technology—have? One consequence is a decrease in skill. Like the pilots who do not need to exercise any manual flying skills, other professionals also suffer erosion of talents that were once deemed essential. So, for example, architects once prized the ability to sketch; now, computer-aided design (CAD) allows architects immediately to move to design without first working through a series of sketches of increasing adequacy. Indeed, CAD software cleans up errors and inadequacies in first drafts, moving the architect even further away from what were once considered basic forms of competency.

Agency is further affected by distancing agents from the world in which they work. For doctors, this can mean paying attention to a tablet screen rather than a patient. For a pilot, it can mean no longer having the tangible mechanical connection to the various parts of the aircraft he controls. This distancing is tied to the diminution of skills: Part of any difficult skill is recognizing what is salient in the world to its exercise. But distance from the world erodes that recognitional capacity. Just ask the Inuit, whose reliance on GPS screens has corrupted a centuries-old ability to navigate seemingly marker-less landscapes of snow and ice with uncanny accuracy.

Agency is eroded when we rely on software designed by others because that software implicitly realizes the presuppositions of its authors: They are the creators who tell the machine what to look for and how to respond, removing from the lives of professionals such as doctors or lawyers the need—and therefore the ability—to judge for themselves what is important in any given situation.

Automation and Autonomy

Carr frequently speaks of the way that technology diminishes autonomy. He should not be misunderstood to be speaking only of autonomy in the modern liberal sense: doing what one wants, unencumbered by obligation or connection to others. On the contrary, when a doctor’s professional autonomy and judgment are replaced by reliance on a mediating computer screen, the doctor’s real human connection to her patient and her real exercise of human agency are diminished.

Proponents of automation frequently promote what Carr calls the “substitution myth.” This term refers to the idea that automation merely takes the place of some discrete activity that imperfect humans used to do. Automation, on this view, leaves everything as it was before, save that it lightens the workload for some overburdened person. But this is false.

Rather, automation brings with it a change in how we think. Automation complacency and bias are important here. Our reliance on automation leads to errors of attention and inference. But the threat is even more worrisome.

Is Automation Making Us Stupid?

Consider the role, discussed at length by Carr, that effort and friction play in learning and remembering. Subjects asked to memorize pairs of antonyms do better if they are initially asked to fill in missing letters of one of the words—for example, instead of being given “hot” and “cold,” they are given “hot” and “c---.” Subjects asked to recall immediately what they have been shown do better at later recall than subjects who merely receive the information over a series of sessions and are only later asked to recall it.

This phenomenon also occurs in the acquisition of tacit knowledge: the struggle to ride a bike, drive a stick shift, play on pitch, learn a foreign language, or cook a risotto takes place through time; it involves inevitable failure and effort, but such a struggle results, when successful, in a deep internalization of “tacit” knowledge. This knowledge imparts the ability to draw on what is known without even bringing that knowledge to the level of full and articulated consciousness.

But if, as Carr argues, automation turns us largely into observers, then its effects may be expected to be deleterious where the development of tacit knowledge and memory are concerned. And this suggests, as do many other points made by Carr, that automation is making us stupid (Carr himself suggested this in a 2008 essay in The Atlantic, “Is Google Making Us Stupid?”).

Carr pursues these points via a discussion of “embodied cognition”: the idea that thinking, for us embodied beings, requires doing. That is, thinking requires an extension of mind out into the physicality of the world in an embodied engagement with that world. Think of the way an infant learns: by touching, tasting, grasping, breaking. Such engagement with the world should not stop at infancy. Nor, we might add, should it be cut off in infancy; although not discussed in this book, readers should come away worried if they have been allowing their young children unfettered access to tablets, smartphones, or computers.

What Happens When Work Doesn’t Connect Us to the Physical World?

What are the long-term consequences of these failures of agency and cognition? I shall focus on just one.

Work, like play, is a basic aspect of human flourishing. We are inclined to deny this in the face of work that is tedious, monotonous, unskilled, and boring, work to which it is impossible to be attentive and mindful. But at its best, work allows us engagement with the materials of the world in ways that utilize long-cultivated skill and hold our attention, giving us a sense of flow, which Carr discusses, following psychologist Mihaly Csikszentmihalyi. Good work surely connects us more deeply to the world around us, and gives us opportunity to impress our character on that world in lasting ways.

But real work requires agency and cognition. It is impossible without autonomous engagement with the things of the world, without planning, without awareness of how the world is and of how it offers or threatens to push back against our intention. It is impossible without the developed skills that enable us to overcome that pushback, even transform it according to our will so as personally to shape the product that is the proximate end of our work, whether that product be a safe flight, a medical diagnosis and cure, or a new building.

What is left when work has been evacuated of agency and cognition? We return to Plato, and his image of human beings chained by the neck and able only passively to watch the shadows, secure from the sun. Carr’s honest engagement with the perils of automation should make us eager for the climb back out of the cave. Carr offers only a few suggestions as to how that might be done, directing us, for example, toward attempts to create more human-centered automation, in which persons are engaged rather than passive.

The topic clearly requires more attention. But Carr deserves recognition for articulating the need for that attention in a smart and focused, if ultimately rather ominous, way.

Christopher O. Tollefsen is Professor of Philosophy at the University of South Carolina and a senior fellow of the Witherspoon Institute. He is the author of Lying and Christian Ethics (Cambridge, 2014).

Complementarity: Lessons from the Adams Family http://www.thepublicdiscourse.com/2015/08/14857/ http://www.thepublicdiscourse.com/2015/08/14857/#comments Mon, 24 Aug 2015 11:00:12 +0000 http://www.thepublicdiscourse.com/?p=14857

Alexis de Tocqueville called “the strength of American women” the great secret of the strength of the American republic. Likewise, strong women were the backbone of the Adams family and its contribution to the political integrity of the American republic.

The letters of John and Abigail Adams were first published by their grandson, Charles Francis Adams (President Abraham Lincoln’s ambassador to Britain). The diaries and letters of John Quincy Adams’s wife, Louisa, have just been published this year, a project their grandson Henry Adams had himself once explored. Henry Adams—who called Tocqueville’s Democracy in America the “bible of my own private religion”—worried that American men and women were losing their appreciation for the complementary strengths and gifts of men and women. He thought the best remedy was to hold up for Americans the image of his grandmother.

Part of the charm of John and Abigail Adams’s letters (which are frequently addressed with terms such as “Dear Miss Adorable” and “My Dearest Friend”) is the way that the two weave a life-long conversation about universal human virtue into their ongoing inquiry into their complementary contributions to mothering and fathering their “little flock” of children. Louisa Adams had a hard act to follow in such a renowned mother-in-law, but her journals reveal a woman of profound reflection—on the meaning of piety (filial, patriotic, and religious) and her role as daughter, wife, and mother.

The Only Perfectly Balanced Mind in the Adams Family

Louisa Adams is the only American First Lady to have been born in a foreign country. Her American parents were living in London when she was born, and during the Revolution she was educated in a French convent school. She married John Quincy Adams during his diplomatic work, and the two immediately set off for Prussia, where he was stationed. She had already endured a number of difficult miscarriages and given birth to her three sons—little George Washington, John II, and Charles Francis—before she made her first visit to her “native” land.

She was soon called on to leave her two oldest sons behind and accompany her husband to Russia with her youngest child. In Tsarist Russia, she endured a very personal “winter”: painfully separated from her older children, she lost her only infant daughter, received news of her own mother’s death, and had to travel alone from St. Petersburg to Paris in the midst of the Napoleonic wars.

Late in life, she bitterly reproached herself for having left her oldest sons behind while she accompanied her husband on his diplomatic mission to Russia. Both of those sons became deeply troubled young men who impregnated women out of wedlock and died young from drink or suicide.

Only the son whom she had kept close to her carried on the family tradition of public service, political integrity, and a lifelong happy marriage with many children. In fact, his sons marveled at Charles Francis Adams as a man “singular for mental poise.” It seemed to them that the influence of his mother had balanced the analytical tendencies of his father, tendencies which often led members of the Adams family into deep depression, bitterness, and drink. With a good dose of both mothering and fathering, he struck his sons as “the only perfectly balanced mind that ever existed in the name.”

Sexless as the Bees

When her grandson Henry Adams read Louisa’s diary, he was inspired to compare Russia to Woman, Woman to Russia—the two conservative forces at the beginning of the twentieth century. Russia acted as a conservative political force in Europe comparable to Woman as a conservative social force in America. Perhaps with the help of Russia the threat of an imperial Germany could be contained. Perhaps with the help of Woman the threat of a corroding materialism might be contained.

Each was powerful, but each powerfully directed its forces internally. Russia, he argued, had its axis of rotation around the Church and around agricultural production to feed its enormous population. It had not yet turned its forces toward industrialization and modernization. If Russia’s forces were ever to be ripped from its axis by, let’s say, an atheistic modernizing revolutionary regime, Adams wrote in 1905, it might destroy all of Europe.

But if Woman were to be torn from her axis of rotation around the cradle, it wouldn’t just destroy Europe—it could destroy human society. In his wide-ranging third-person autobiography, The Education of Henry Adams, Adams observed:

The woman’s force had counted as inertia of rotation, and her axis of rotation had been the cradle and the family. The idea that she was weak revolted all history; it was a palæontological falsehood that even an Eocene female monkey would have laughed at; but it was surely true that, if her force were to be diverted from its axis, it must find a new field, and the family must pay for it. So far as she succeeded, she must become sexless like the bees, and must leave the old energy of inertia to carry on the race.

In this context of a Tocquevillean fear of whether Russian absolutism or American freedom would triumph in the coming century and whether the American triumph would indeed be a victory for liberty or for the incessant pursuit of petty and paltry pleasures that “enervate the soul and noiselessly unbend its springs of action,” Adams held up, at the beginning of The Education, the image of his grandmother.

Education by Grandmothers

Henry Adams recalled his grandmother as the very portrait of religious peace. She was “a peaceful vision of silver gray” presiding over her old president, her china tea set, and her box walks. She was refined, gentle, fragile, delicate, and remote. As a boy, Adams “knew nothing of her interior life” but he sensed that she was “exotic”—that she did not belong wholly to the political world of Boston, with its confidence that a political utopia could be achieved on earth.

After “being beaten about a stormy world” and enduring a life of “severe stress and little pure satisfaction,” it was clear that she placed her hopes in eternity. The political world believed that absolute democracy, state education, and total freedom of speech would usher in the end times of history. The political creed was that “Human nature works for the good and three instruments are all she asks—Suffrage, Common Schools, and the Press.” All doubts of this creed were political heresy.

From his grandmother, Henry Adams gleaned his sense that there were values beyond the negotiable values of politics. Without her, his education would have been all Mars and no Venus—a purely political education with no religion—all war and no contemplation of beauty, all work and no genuine leisure. In the “Boston” chapter of the Education, Adams wrote:

The children reached manhood without knowing religion, and with the certainty that dogma, metaphysics, and abstract philosophy were not worth knowing. So one-sided an education could have been possible in no other country or time, but it became, almost of necessity, the more literary and political. As the children grew up, they exaggerated the literary and the political interests.

Without religion, the poetic imagination became utopian, and the political imagination became utilitarian. Americans worshipped “the Dynamo”—the physical and mental energy of man—and utterly lacked the cult of “the Virgin”—the contemplation of the laws of Nature and Nature’s God, and openness to the grace needed to fulfill them.

Louisa Adams always felt herself a fish out of water in a culture that gave more attention to the virtues of equality than to the virtues of piety and pity. She saw her role as wife and mother as that of an educator in the refined art of deference and condescension—the discernment and acknowledgment of realities of inequality, weakness, strength, power and infirmity, and a careful attunement to the unequal duties such differences gave rise to.

She considered a culture that saw nothing but equal duties, equal rights, and relations of convenience and mutual interest as a tyranny against her own nature. “My temper is so harassed and I am I fear so imbued with strange and singular opinions, and surrounded by persons with whom it is decidedly impossible for me to agree,” she exclaimed. “I feel that I have strange exaggerated ideas on most subjects which must be utterly incomprehensible but are utterly impossible for me to eradicate.”

One of her most ineradicable ideas was the sacredness of the bond between parents and children, the most palpable bond of piety and pity. She wrote:

It has been the fashion to say that as Children were not born to please themselves, no real ties bind them to their parents; and that blood relationship, should exact neither affection or gratitude—To my mind there is no truth whatever in such an assertion . . . From the moment of the birth, we incur a vast debt of gratitude which a life cannot repay

. . . There must be a great dereliction in that mind which could for a moment shrink from the acknowledgement of so vast a debt: founded in the weakest of all vanities, self idolatry!

Louisa Adams herself attributed the deepening of her sense of piety and pity to her experience of motherhood. She wrote that her religious opinions and sentiments had “‘grown with my growth, and strengthened with my strength’ though until I became a Mother; perhaps not properly weighed and considered; one more of precept, habit, and example than of meditated reflection.”

Adams worried that as Americans pursued a culture that was “sexless as the bees,” the complementarity that had produced the remarkable personal poise of his father—the balanced judgment, the eye for both contractual duties and the more delicate bonds of affection—would be lost. To his mind, the proper education of the young American required both father and mother, grandfathers and grandmothers.

Susan Hanssen is an associate professor of history at the University of Dallas.

Amy Kass: Teacher, Truth-Teller, Defender of Dignity http://www.thepublicdiscourse.com/2015/08/15561/ http://www.thepublicdiscourse.com/2015/08/15561/#comments Fri, 21 Aug 2015 11:00:08 +0000 http://www.thepublicdiscourse.com/?p=15561

In 1961, Amy Apfel was united in matrimony to Leon Kass, creating one of the most beautiful marriages—and fruitful intellectual partnerships—anyone can imagine. On Tuesday evening, Amy Apfel Kass died after a long and truly valiant struggle against cancer. The loss for Leon is incalculable. But the same can be said for the loss for the rest of us. For Amy Kass, one of the noblest souls God ever created, was a national treasure.

Like her husband, Amy was a fine scholar—careful, subtle, impressively insightful. But her true vocation—her calling, her mission in life—was to teach. And with the possible exception of her husband, no one ever taught more masterfully.

Amy knew that teaching was more than merely imparting information; but she did not disdain the dimensions of teaching that necessarily require the teacher to impart an accurate understanding of the facts. She recognized that sound reflection, analysis, and interpretation presuppose such an understanding, whether the subject is American history, the natural sciences, or Shakespeare’s plays and sonnets. Knowing the facts is not sufficient for intellectual achievement, but it is necessary.

What’s more, Amy possessed an infallible radar for cant. She could smell intellectual nonsense a mile away, and she would have none of it. She was among the few with a perfect record of avoiding the foolish fads and fashions that have spread like viruses through the intellectual culture over the past fifty years. And the Lord alone knows how many students she steered away from the ditches of these fads and fashions.

Amy’s devotion to excellence in teaching was part of a larger moral vision that guided her throughout her life and shaped her character. At the core of that vision was a sense of the profound and equal dignity of the human person.

In the 1960s, the Kasses were among the young northern, often Jewish, activists who traveled to the segregated South to fight for the civil rights of their African-American fellow citizens. In 1965, they spent a month in Mississippi—the very heart of the segregationist Dixie—living with a farm family in a home without a telephone, hot water, or indoor toilet while educating, organizing, and registering black citizens to vote. The moral courage for which she and Leon would become famous in standing for intellectual integrity was foreshadowed by this act of physical courage.

As a young woman, Amy defied the bullying of racists in Mississippi. Later she would defy the bullying of those who sought to impose in the academy and the broader culture those dogmas of the left—including a reflexive anti-Americanism and hostility to the Judeo-Christian tradition—that have come to be known as “political correctness.” She always spoke the truth as she knew it, no matter the personal and professional risks and costs. She honored truth because she recognized that the nobility of human beings has something important to do with our capacities for, and natural orientation to, truth-seeking.

Amy Kass believed that a mind truly was a terrible thing to waste. And that conviction was at the core of her devotion to students—and not just those who sat in her classrooms at Johns Hopkins, St. John’s, and the University of Chicago. Much of her energy, especially in the later decades of her life, was dedicated to educating the general public, especially regarding civic matters. Often in cooperation with gifted younger scholars and protégés, such as Diana Schaub and Yuval Levin, Amy and Leon produced superb materials to help Americans of all ages and stations to understand more fully, and appreciate more deeply, American ideals and institutions. She was the best kind of patriot—one who loved her country not simply because it was hers, but because its principles, however often we Americans as a people have failed to live up to them, are true and good. Her constant endeavor was to inspire us to live up to them more perfectly.

Another of Amy’s central concerns was promoting healthy and fulfilling romantic relationships between young men and women, leading to happy, enduring marriages. She knew from personal experience just how valuable such relationships are, and she was in no doubt about how vital strong marriages are, not only to the spouses themselves, but also to their children and the whole community. She and Leon were grieved by the collapse of the marriage culture in less affluent sectors of American society and appalled by the emergence of the hook-up culture on college campuses and, increasingly, in high schools and even middle schools. Amy had no tolerance for anything coarse or predatory, anything that degraded the human spirit. That is why she could not abide racial segregation or the collapse of courtship into licentiousness.

For young women in particular, Amy Kass was a role model: an impressive thinker and teacher, a loving and devoted wife, mother, and grandmother, a courageous moral witness, a loyal and generous friend, and a patriot. For those of us who knew and loved her, beginning with the husband whose life she so profoundly enriched and whose own virtues so remarkably mirror her own, she is simply irreplaceable. It is hard to imagine life without her. But what cannot be taken from us are the lessons she taught, not only by precept, but by the splendid example of the life she led.

Robert P. George is the McCormick Professor of Jurisprudence, and the Director of the James Madison Program in American Ideals and Institutions, at Princeton University.

What’s Conservative about Radical Traditionalism? http://www.thepublicdiscourse.com/2015/08/14588/ http://www.thepublicdiscourse.com/2015/08/14588/#comments Thu, 20 Aug 2015 11:00:54 +0000 http://www.thepublicdiscourse.com/?p=14588

For many years now, the litmus test of an American conservative has been whether he or she is committed to limited, constitutional government and to the proposition of the Declaration of Independence “that all men are created equal, that they are endowed by their Creator with certain unalienable rights.” Although much maligned and often misunderstood, this tradition serves a vital purpose in our republican government: it keeps conservatives united around a set of concrete political and philosophical goals that every layman can understand.

Strange to say, then, that conservatism is increasingly under assault, not from the Left, but from within. This attack is driven by false narratives that blame the Founders’ natural-law liberalism for today’s cultural and political decay. By contrast, the life and work of Frederick Douglass can serve as an alternative model for the conservative movement—a way of upholding natural-law liberalism, and yet remaining introspective about our nation’s origins and future.

The Growing Allure of Decline Narratives

Evidence of uneasiness with the liberal tradition of the American Founding has recently crept into some conservative intellectual publications. A 2015 First Things article by “radical Catholic” scholar Michael Hanby called for conservatives of all stripes to engage in a “fundamental, ontological critique” of the principles upon which America was founded. This followed Patrick Deneen’s 2012 castigation of American conservatism for being “itself a species of [the] liberalism” destroying Western civilization. Similar claims have been made, of late, by those Neo-Calvinists with “Dominionist” or “Reconstructionist” leanings, such as Gary North and the late R.J. Rushdoony. In 1990, Richard John Neuhaus wrote that this group was disproportionately influential. Twenty-five years later, it continues to grow and spread its message that natural-law liberalism is a hopelessly outdated relic of Enlightenment thought, and must be replaced by a kind of Biblical covenantalism.

Also responsible for this sea change in conservative thought are popular writers at sites such as The American Conservative and Front Porch Republic, whose understandable emphasis on “local” solutions is often accompanied by a wholesale rejection of the modern order. Although these writers often disagree with “radical Catholics” and “Reconstructionists,” what they have in common with those groups is far more significant. All are deeply skeptical about the “self-evident truths” that America is based on. They reject the idea that governments are founded to secure natural rights. Instead, they look back to an older, supposedly more virtuous tradition than the Founding, a tradition innocent of modernity’s “conceits”—especially the conceit that church and state can be separated.

While there are many reasons for the growing appeal of “radical traditionalism” (by which I mean all those to whom modernity represents an unprecedented break from the Western tradition), the most important seems to be the Obama administration’s social extremism. When federal officials began using the language of “individual rights” and “personal freedom” to advance their agenda of sexual autonomy, conservatives balked. If the language of the Founding could be marshaled to the support of abortion, contraception mandates, and same-sex marriage, then perhaps there was something wrong with the Founders’ principles from the very beginning. Did the Constitution, grounding the American regime on “religiously neutral” ideas such as natural rights, unwittingly make today’s moral anarchy inevitable?

Problems with the Decline Narrative

This tale of a long, inevitable decline from natural-law liberalism into modern libertinism has proven extremely alluring to some, but it is deeply flawed.

Reinhold Niebuhr once observed that modern man “always imagines himself betrayed [into the defiance of nature’s laws] by some accidental corruption in his past history or by some sloth of reason. Hence he hopes for redemption, either through a program of social reorganization or by some scheme of education.” The radical traditionalist falls into this same trap. He fixates on how the serpent got into Paradise, and seeks social redemption in the revival of localism, for example, or the return to older philosophies of education. He forgets that even a perfect culture would, at last, decay; that the first decline and fall happened not in a city, but a Garden.

The decline narrative is also guilty of “presentism.” That is, instead of viewing our times in the light of history, it views history in the light of our times. Looking back over the past, it picks and chooses events that conform to a preconceived notion of “what went wrong.” History simply does not work this way. Nations are shaped by great statesmen and the force of unforeseen events as much as by the philosophical DNA of their founding. If the triumph of selfishness and unrestrained democracy were inevitable from the beginning, what are we to make of the fact that Americans voted to curtail democratic excesses in 1787? How do we account for their vote in 1860 against the selfish doctrine of popular sovereignty?

There are also factual problems with this approach to the past. In patching together a dramatic storyline, decline narratives suggest continuities where there are none. Progressivism, as it turns out, is in no way the natural outgrowth of the Founders’ liberalism. In fact, Woodrow Wilson and other Progressives of the late nineteenth century systematically confronted, and rejected, the Founders’ concepts of natural law, natural rights, and the social contract. “No doubt we are meant to have liberty,” Wilson remarked, “but each generation must form its own conception of what that liberty is.” For Wilson’s heirs, this eventually meant the exaltation of positive rights over moral duties. Officials in the Obama administration today may still speak in the language of “liberty,” “rights” and “equality,” but they are using these terms as the Progressives and their heirs understood them, not as the Founders did.

Finally, because they read America’s decline as a unique, philosophical phenomenon, rather than an inherent part of human civilization in all times and places, radical traditionalists often suffer from political myopia. Sometimes this expresses itself in romantic longing for the past—a premodern age free from capitalism and liberalism, whether a medieval monarchy, Calvinist Geneva, or some kind of agrarian commonwealth. The problem, of course, is that the modern order was born precisely because life in these societies was so intolerable to people of conscience. If we hope to avoid religious oppression, surely this is an odd way of going about it.

At other times, this myopia expresses itself in blindness to the contemporary political context. By focusing inordinately on this nation’s failings, radical traditionalism loses all sense of proportion, of political reality. Under the spell of stories about America’s fall from grace, one can easily forget that the specks in other nations’ eyes are beams, too, when viewed from within. As it turns out, Spain, France, and many European countries that have (or, until recently, had) established religions and supposedly more vibrant traditional cultures made same-sex marriage socially acceptable well before many American states. How do we account for this fact if the decline narrative is accurate?

The Conservative Alternative to Radical Traditionalism

Frederick Douglass’s life illustrates a conservative alternative to radical traditionalism—an alternative that allows us to be honest about America’s failures, without confusing every failure of practice for a failure of principle.

Although born a slave and raised in the midst of far greater persecution than any of us is likely ever to know, Douglass became a champion of the Constitution in his later years. It was for this reason that he broke with many of his abolitionist friends, including William Lloyd Garrison. In their zeal to remain unstained by slavery, these radical abolitionists had accepted the Dred Scott narrative of the Founding: the Constitution was written by slaveholders and was intended for the government of whites only. Garrison and his friends concluded from this that the only way to purge the nation of slavery was to abandon the Constitution (which Garrison called a “devil’s pact”) and its principles.

Despite the fact that he himself had suffered as a slave under the Constitution and had criticized the American Experiment for its double standard, Douglass saw through this flagrant mischaracterization of the Founding. He contended that Garrison’s strategy was self-defeating for two reasons. First, it distracted abolitionists from the true causes of slavery:

Those . . .  who [deal] blows upon the Union in the belief that they are killing slavery, are . . . woefully mistaken. They are fighting a dead form instead of a living . . . reality. It is . . . not because of the peculiar character of our Constitution that we have slavery, but the wicked pride, love of power, and selfish perverseness of the American people.

Second, Douglass believed abolitionism would be left defenseless if it yielded America’s tradition of liberal constitutionalism to its foes. The solution for Americans’ “selfish perverseness,” he argued, lay in upholding the inseparable principles of rule under law, equality, and God-given rights and duties, not abandoning them. In his mind, these had not been tried and found wanting; they had been found difficult and not tried:

All I ask of the American people is that they live up to the Constitution, adopt its principles . . . and enforce its provisions. When this is done, the wounds of my bleeding people will be healed . . . and liberty, the glorious birthright of our common humanity, will become the inheritance of all the inhabitants of this highly favored country.

Not the Easy Path, but the Right One

Please do not mistake my point: our nation’s origins should not be above scrutiny. One of America’s greatest strengths is that it was forged out of an intense, nationwide debate. Searching for answers to why and how this country “went wrong” is a natural reaction to our cultural crisis. As we struggle to protect our own families and communities, save the unborn and defend marriage, radical solutions become tempting. Rather than throwing ourselves back into the exhausting work of maintaining freedom, the radical traditionalist asks, why not simply begin again? But the easy path is rarely the right one. This is the paradox of radical traditionalism: In order to build, it must first destroy; seeking to cure present errors, it repeats past ones; decrying license, it destroys liberty.

Although he refused to overlook America’s failings, Douglass recognized that the philosophy and institutions of a liberalism under God, though often honored in the breach, provided the surest foundation for the eventual flourishing of his fellow slaves. He foresaw that abandoning the Constitution would not mean the end of evil—it would only mean that evil would take on a more pernicious form, free to spread without check.

We cannot afford to forsake our nation’s principles at precisely the moment our culture has most need of them. Properly understood, the principle of equality under the “Laws of Nature and of Nature’s God” gives conservatives the perfect weapon for exposing—in language the average American understands—the inequalities inherent in abortion and the unnaturalness of same-sex marriage. It is true that the work ahead of us promises to be daunting, but this is nothing new. Free governments have always asked much of their citizens. As Daniel Webster once observed, “God grants liberty only to those who love it, and are always ready to guard and defend it.”

Nathan Gill is a candidate for a PhD in politics from Hillsdale College’s VanAndel Graduate School of Statesmanship.

Why the "Catholic" "Pro-Life" Case for the Bomb Fails http://www.thepublicdiscourse.com/2015/08/15543/ http://www.thepublicdiscourse.com/2015/08/15543/#comments Wed, 19 Aug 2015 11:00:11 +0000 http://www.thepublicdiscourse.com/?p=15543

The temptation to revisit past arguments is one usually best resisted. Once the lines of the dialectic have been laid down, there is often not much more to be said. So I might rightly be accused of returning one too many times to a familiar well if, having just written another Public Discourse piece on the atomic bombings of Japan, I return to that issue one last time.

But it seems merited. In response to that piece, one reader sent a link to a recent video essay by Fr. Wilson Miscamble, with whom I have gone round on this issue before. Fr. Miscamble’s video adds no new information or arguments, beyond the striking visual of a Roman Catholic priest asserting, in his clericals, that the intentional killing of thousands of Japanese civilians was the least bad option. This despite a uniform tradition of Catholic teaching that innocents are never to be intentionally killed, regardless of the consequences.

As I noted in an earlier essay, in which I also registered my respect for Fr. Miscamble as a standard-bearer for the pro-life cause, Catholics and pro-lifers should be concerned about the integrity of their Church and cause when such a prominent defender of the innocent in one context is willing to justify their deaths in another because of the alleged value of the consequences. For the Catholic tradition and the pro-life cause more generally are steadfast in holding that the dignity of the human person neither waxes nor wanes, and that the lives of the innocent are always inviolable.

But Fr. Miscamble makes some points that have perhaps not been adequately dealt with. So I undertake here to draw attention to some claims of his that simply cannot be sustained and that, again, for the better defense of both truth and life, must be abandoned.

Fr. Miscamble closes his essay by saying that “The judgment of history is clear and unambiguous.” He then itemizes the outcomes achieved or prevented by the bombings, and concludes, “Given the alternatives, what would any moral person have done in Truman’s position?” The ending echoes the charges made at the beginning of the video, that those who “condemn” Truman do so on the basis of “limited historical knowledge.”

So the problem, and the solution, are to be found in historical knowledge. Here Miscamble echoes, perhaps deliberately, a seemingly parallel claim made by pro-lifers (including, most recently, Senator Marco Rubio). On the matter of abortion and embryo-destructive research, pro-life advocates of the unborn often advert to the importance of science and scientific knowledge. We, the pro-life cause, take ourselves to be at an advantage over the pro-choice side precisely because we acknowledge what scientists recognize: that the lives of individual human beings begin at fertilization, when an oocyte is penetrated by a sperm cell, both gametes cease to exist, and a new single-celled organism comes into existence.

Some extraordinarily ignorant claims made by Senator Rubio’s critics aside, this is the truth—the scientific truth. And if that truth plays such a central role in the pro-life argument, then surely historical truth should play an equally central role in the argument about the atomic bombings. And that truth—the historical truth—may well be, as Fr. Miscamble holds, that the bombing saved a million or more lives.

Let us grant, provisionally, that this is indeed the historical truth. And let us return to the case of the human embryo or fetus, likewise conceding, as many pro-choice thinkers do, that this entity is indeed a human being. That matter of fact simply does not settle the issue at the end of the day. For, as Pat Lee, Robby George, and I explained yesterday at Public Discourse, what is needed for any moral argument, in addition to an awareness of the facts of the case, is also an adequate grasp of the moral norms that must be brought to bear in cases of this sort.

With the Catholic Church, and many thinkers in the natural law and just war traditions, I have proposed and defended the norm that “no innocent person may be intentionally killed.” I have defended this norm as a moral truth, accessible to human reason, that should govern the conduct of all human persons.

Human persons are, as Pope Saint John Paul II put it in his encyclical Evangelium Vitae, of incomparable worth, and beings with profound dignity. This “sacredness of life gives rise to its inviolability.” Moreover, the fundamental moral norm for Christians is that they must love their neighbor as themselves. Thus, the Pope concluded: “by the authority which Christ conferred upon Peter and his Successors, and in communion with the Bishops of the Catholic Church, I confirm that the direct and voluntary killing of an innocent human being is always gravely immoral.”

Every true defender of the unborn accepts this norm. And when taken in conjunction with the scientific truth that the human embryo or fetus is a human being, this norm leads to the conclusion that direct and voluntary killing of the unborn human being is always gravely immoral.

Those who fail to recognize the truth of this norm will, of course, judge otherwise. Some think that the norm is: act so as to maximize beneficial consequences. This norm is not workable, both because consequences cannot be clearly known, and because there is no intelligible sense to the expression “greatest good,” which is required for the norm’s application. Adherents of this norm, which is plainly part of neither the Catholic nor the natural law and just war traditions, will be in no position to recognize that the direct killing of the innocent is always and everywhere wrong.

Now return to the disputed question of the bombings of Hiroshima and Nagasaki (and before them, the fire bombings of Tokyo, of Dresden, and of other civilian centers), chosen in part because of their military significance, but also in part, and unquestionably, because of the high density of civilian population in the areas surrounding properly military targets. The targets were, as Secretary of War Stimson was later to describe them, “dual”: military and civilian. Let us take this too as a matter of fact; no recent participant in the debates over Hiroshima and Nagasaki has offered any plausible grounds to deny it, as far as I can tell.

And let us again grant that it is a matter of historical truth (though there are some grounds for doubt about this) that many more lives, American and perhaps Japanese as well, were saved by the intentional bombing of these dual, military and civilian, targets. Do the historical facts give us any grounds at all to conclude that “any moral person” would, like Truman, have approved the bombing mission?

Clearly not. For the facts give us no moral warrant for anything, just on their own. The idea of a “verdict of history” on a moral question is an error, unless it means no more than an appeal to what people through (recent, American) history have judged as a matter of morality. But Fr. Miscamble surely does not want to appeal merely to (recent, American) popular prejudice for moral verdicts.

It would be a far more interesting question to ask Fr. Miscamble this: given the facts we have granted, what conclusion is warranted by the norm, which he surely accepts as an underpinning of the pro-life cause, and which was solemnly affirmed by St. John Paul in his encyclical Evangelium Vitae, that the “direct and voluntary killing of an innocent human being is always gravely immoral”?

The answer should be clear: given that norm, and the fact that the intention included death for enough citizens to make “a profound psychological impression on as many of the inhabitants as possible,” then the bombing of Hiroshima and Nagasaki must be judged gravely immoral.

Does that mean that America was stuck with only bad options? It is an unfortunate fact that sometimes unwillingness to do the wrong thing leaves us with nothing but unpleasant, indeed bad, options. Just ask St. Thomas More. But it is also a fact that often our willingness to embrace the immoral option as the quickest solution to a perceived “intractable” problem leads us to overlook more creative approaches to the difficulty, approaches not initially attractive, but superior to the one chosen precisely because they would have fallen within the boundaries of the morally permissible.

Whenever violation of a norm taught as absolute throughout the history of the Catholic or natural law tradition is justified by Catholics or natural lawyers with the claim that “otherwise, we would not have been able to accomplish X,” where X is some admittedly laudable goal, such as ending the war, or advancing the pro-life cause, we should be suspicious. Such claims sometimes reflect the facts, though not the moral facts, of the matter. But frequently they reflect a failure of imagination and creativity, failures obscured by commitment to the immoral solution and the advances gained by that solution.

Were a million casualties guaranteed in every possible alternative actually open to Truman? We are surely not in a position to say so. Such casualties were perhaps guaranteed by adherence to a demand for unconditional surrender. But that demand was not itself an inevitability, and was, indeed, almost certainly immoral. A country fighting in the belief that its entire political and cultural life might be dismantled is indeed unlikely to surrender without a fight. But no one—no one—can say with certainty that Japan would not have responded appropriately to a more reasonable set of demands.

And no one—no one—who claims to be pro-life can say that women, children, the elderly, and indeed, the unborn of a nation may be legitimately targeted with death for the sake of the consequences, however beneficial. Fr. Miscamble’s video saddens me precisely to the extent that it finds any approval at all amongst members of the pro-life cause.

Christopher O. Tollefsen is College of Arts and Sciences Distinguished Professor of Philosophy at the University of South Carolina and a senior fellow of the Witherspoon Institute. He is the author of Lying and Christian Ethics (Cambridge, 2014).

Marco Rubio Is Right: The Life of a New Human Being Begins at Conception http://www.thepublicdiscourse.com/2015/08/15520/ http://www.thepublicdiscourse.com/2015/08/15520/#comments Tue, 18 Aug 2015 11:00:01 +0000 http://www.thepublicdiscourse.com/?p=15520

Senator Marco Rubio is right. The life of a human being begins at conception—not at implantation, “viability,” or birth. This is a scientific fact.

It is not, as CNN journalist Chris Cuomo ignorantly insisted in a televised confrontation with Rubio, a claim of “faith” with no scientific basis. To our surprise, however, the distinguished bioethicist Arthur Caplan has intervened to try to rescue Mr. Cuomo in a fight he is losing and losing badly. According to Professor Caplan, Senator Rubio has the science wrong. But he doesn’t. And Professor Caplan fails to show that he does.

Red Herrings

Caplan first appeals to a resolution issued by the National Academy of Sciences in 1981, in response to a congressional bill asserting “that actual human life exists from conception.”

It would have been clearer to say that what begins at conception is an actual human life—the life of a human individual, a human being. The bill’s vaguer wording allowed the NAS to respond with a blatant dodge: it simply noted that “human life” (as opposed to a human life, the life of a new human individual) is passed on continually across generations. That is true—but irrelevant to the issue at hand. So is the NAS resolution’s next point: that when the embryo becomes “a person” is a philosophical (or theological) question on which science is silent.

That is also true. Science reveals empirical facts. It cannot tell us who, if anyone, is a “person,” morally speaking—which beings, if any, have fundamental dignity and basic moral rights. There are correct answers to these questions—they are not merely subjective issues—but they are not answered by application of scientific methods of inquiry. We cannot determine whether there even is such a thing as human rights, or whether slavery, or Hitler’s genocide against Jews, was morally wrong, by conducting laboratory experiments or constructing mathematical models.

What science can and does reveal is whether a human zygote, embryo, or fetus is a newly conceived human being. This was the question on which Senator Rubio appealed to science. And, contrary to Professor Caplan’s complaint, Rubio got it right. Science shows that the human zygote, embryo, and fetus, like the infant or adolescent, is indeed a human being—a living individual of the human species. And—despite its red herrings and irrelevancies—the Academy did not deny this truth. The 1981 resolution provides no support at all for Professor Caplan’s charge against Rubio.

The Scientific Facts

Readers may wonder why Professor Caplan reached back to a resolution passed by a learned society some thirty-four years ago in seeking authority to support his case against Senator Rubio. Professor Caplan remarks: “Since that time [1981], scientists and physicians have remained more or less mum on the issue of when life begins.” But that is simply incorrect. Before and after 1981, there have been countless scientific monographs and scholarly articles—in embryology, developmental biology, and genetics—explicitly affirming that a human being at the earliest stage of development comes to be at fertilization. Here are three of many, many examples:

“Human life begins at fertilization, the process during which a male gamete or sperm unites with a female gamete or oocyte (ovum) to form a single cell called a zygote. This highly specialized, totipotent cell marked the beginning of each of us as a unique individual.” “A zygote is the beginning of a new human being (i.e., an embryo).” (Keith L. Moore, The Developing Human: Clinically Oriented Embryology, 7th edition. Philadelphia: Saunders, 2003, pp. 16, 2.)

“Fertilization is the process by which male and female haploid gametes (sperm and egg) unite to produce a genetically distinct individual.” (Signorelli et al., “Kinases, Phosphatases and Proteases during Sperm Capacitation,” Cell and Tissue Research 349, no. 3 (March 20, 2012): 765.)

“Although life is a continuous process, fertilization (which, incidentally, is not a ‘moment’) is a critical landmark because, under ordinary circumstances, a new, genetically distinct human organism is formed when the chromosomes of the male and female pronuclei blend in the oocyte” (emphasis added; Ronan O’Rahilly and Fabiola Mueller, Human Embryology and Teratology, 3rd edition. New York: John Wiley & Sons, 2000, p. 8). (Many other examples could be cited, some of which may be found here.)

That is the authority of science. On request, we can cite dozens more examples. The authorities all agree because the underlying science is clear. At fertilization a sperm (a male sex cell) unites with an oocyte (a female sex cell), each of them ceases to be, and a new entity is generated. This new entity, initially a single totipotent cell, then divides into two cells, then (asynchronously) three, then four, eight and so on, enclosed all the while by a membrane inherited from the oocyte (the zona pellucida). Together, these cells and membrane function as parts of a whole that regularly and predictably develops itself to the more mature stages of a complex human body.

From the zygote stage onward this new organism is distinct, for it grows in its own direction; it is human—obviously, given the genetic structure found in the nuclei of its cells; and it is a whole human organism—as opposed to what is functionally a part of a larger whole, such as a cell, tissue, or organ—since this organism has all of the internal resources and active disposition needed to develop itself (himself or herself) to the mature stage of a human organism. Given its genetic constitution and epigenetic structure, all this organism needs to develop to the mature stage is what human beings at any stage need, namely, a suitable environment, nutrition, and the absence of injury or disease. So it is a whole human organism—a new human individual—at the earliest stage of his or her development.

This is why it is correct to say that the developing human embryo is not “a potential human being” (whatever that might mean) but a human being with potential—the potential to develop himself or herself (sex is established from the beginning in the human) through the fetal, infant, child, and adolescent stages and into adulthood with his or her identity intact.

So the man known to the world as Arthur Caplan is the same human being—the same living individual of the human species—who was once a Columbia graduate student, and before that a gifted Brandeis undergraduate, a rambunctious teenager, a precocious toddler, a newborn infant, a seven-month old fetus, a four-week old embryo, and a newly conceived human being. He was never a sperm cell or an oocyte; those were (genetically and functionally) parts of his parents. But he was once a zygote—a single-celled human individual, distinct from the gametes whose union brought him into being.

In Search of Bright Lines

Professor Caplan advances three arguments to try to show that Senator Rubio gets the science wrong. First, he suggests that science uncovers no bright line—no discrete point at which a new life comes to be all at once. Does conception occur, he wonders, when the sperm penetrates the oocyte, or when the genes they contain mix, or when the new single genome begins to function, or . . . ? Caplan thus mentions several events at or close to the time of fertilization, claims they form a continuum, and then concludes that science shows no “clean boundaries.” But this point casts no doubt on the proposition that the life of a new human being begins at conception. Caplan’s argument merely asserts, without proving, that no discontinuity occurs at fertilization.

And the events he considers all happen within hours of when sperm meets the egg. So he gives no reason to think that human beings begin (e.g., that he himself or anyone else began) at implantation or “viability” or sentience or birth or a few months after birth when self-awareness has clearly been acquired.

We know that a discrete event, an abrupt change, does occur at fertilization. Before fertilization, there are two parental sex cells, each internally oriented toward meeting and fusing with a cell of its opposite number. (None of us was ever literally a twinkle in anyone’s eye.) Afterward, there is one entity, genetically distinct from both sex cells; and it functions as an organism, developing itself (i.e., developing by an internally directed process) toward more mature stages of human life. This marks a change in kind (and not a mere change in degree of development).

Biological entities, in other words, are defined by their characteristic behavior; and that is enough to show that what is generated at conception is a new human being. But by applying the same principle to a more detailed biological picture, we can answer the question to which Caplan shifted: Which event surrounding the meeting of sperm and egg constitutes their transformation into a zygote?

When a sperm penetrates an oocyte’s outer membrane, the resulting structure may look similar, but its characteristic behavior is opposed to that of a typical oocyte. (For one thing, it repels rather than attracts sperm.) So this penetration, when successful, is the point at which oocyte and sperm cease to be and give rise to a new entity—a new organism. This is confirmed by the fact that the new structure immediately begins to engage in activities aimed at its survival as a whole, and its organized differentiation and growth into later stages of human life. It exhibits, that is, holistic (i.e., integral) organic functioning: the behavior of a single organism. We therefore agree with those scientists who conclude that a new human organism comes to be when a sperm cell successfully penetrates an oocyte.

Some have argued (less convincingly, we think) that two sex cells have not given way to a new human being until their genetic materials have actually intertwined. Their disagreement is over at what precise point the two sex cells cease to be and a new single entity begins. And if they are right, the difference is a matter of hours, not days or weeks. Either way, a human being exists long before a woman knows she is pregnant (and even before the blastocyst stage, when cells could be extracted for research). Rubio was right.

Arguments from Mortality

Professor Caplan offers another argument, but again it is beside the point. He appeals to high estimated rates of embryonic death before implantation (and of miscarriages) to argue that “many embryos that result from conception—indeed, the majority of them—lack the capacity to become living human beings.”

Let’s sort the facts from the errors. Yes, fertilization can fail. When it does, the result isn’t a human embryo—a new individual member of the human species—but what anyone who knows anything about human embryology and teratology recognizes as a disorganized growth: a complete hydatidiform mole, for example, or a tumor. Instead of organization for growth in the direction of maturity, there is disorganization: in some cases, for instance, a random mass of skin cells and hair. The result (mole, tumor, cyst) is not a human being. No one would object to discarding it.

So from the fact that fertilization sometimes fails, nothing follows for the debate between Rubio and Cuomo (or Caplan). That fertilization can produce moles or tumors when it fails says nothing about the nature of the organized, self-directing entity that results when it succeeds.

What about Caplan’s suggestion that high mortality rates among embryos mean that most of them “lack the capacity to become human beings”?

The argument rests on a basic error. It is like saying that babies suffering from genetic diseases that will cause death in infancy are not human beings because they will die before learning to walk or talk. An embryo, as opposed to a teratoma or complete hydatidiform mole, is already a human being—identifiable as such by its (genetic and epigenetic) organization and behavior now.

It is true that many human embryos die before birth from health or environmental problems. That does not mean that they were not human beings—any more than high infant mortality rates from natural causes in antiquity (and in some places, alas, even today) meant (or mean) that ill-fated newborns are not human beings.

Professor Caplan makes one more observation which he suggests undermines Senator Rubio’s claim, but it too turns out to be beside the point. There are rare cases in which twins or triplets are gestating and “one of those lives is absorbed into the body of another—fetal resorption.” This phenomenon is well-known to embryologists, developmental biologists, and anatomists, but it does not mean that the embryos that were lost by resorption were, before their deaths, something other than embryonic human beings—human individuals in the embryonic stage of development.

Imagine three human beings coming to be—however that happens in fact—and one of them then dying. If the cells of the deceased are somehow absorbed into the survivors (perhaps in connection with some biotechnological treatment), does that undermine the status of any of the three as a human being? The answer is obviously no.

Conclusion

Peter Singer, who defends the morality of abortion and even infanticide, can hardly be pinned as a pro-life propagandist. But he happily concedes—indeed, insists—that the debate about abortion is not about when a human life begins. Abortion obviously takes the life of a human being. Marco Rubio is right about that, and Chris Cuomo has no idea what he’s talking about.

The real question is whether human beings have inherent worth and dignity—and a right to life—or whether their value and right to life depends on factors such as age, size, stage of development, or physical health. Do all human beings have a right to life, or are some “not yet persons” (the unborn, the newly born), or “no longer persons” (those suffering from severe dementia or in minimally conscious states), or lifelong “non-persons” (those congenitally severely cognitively disabled)? Are all human beings equal in worth and dignity? Pro-lifers say yes. Professor Singer and other honest, informed abortion advocates say no.

Science cannot settle that dispute. It cannot tell you that it is wrong to kill the physically handicapped on the ground that they are, as the Nazis said, “useless eaters.” For that matter, it cannot tell you whether people may be enslaved or pillaged on account of their language or race.

But for those who reject sorting human beings into “superiors” and “inferiors”; for those who embrace the principle at the heart of our civilization—the equal dignity of all human beings—science can reveal something crucial indeed: namely, who is a human being. And pace Professor Caplan, Senator Rubio is on the firmest possible scientific ground when he says that science shows that the child in the womb, from the very point of successful fertilization, is indeed a human being.

Patrick Lee is the McAleer Professor of Bioethics at Franciscan University of Steubenville, Christopher O. Tollefsen is College of Arts and Sciences Distinguished Professor of Philosophy at the University of South Carolina, and Robert P. George is McCormick Professor of Jurisprudence at Princeton University.

The Constitutional Powers of War and Peace http://www.thepublicdiscourse.com/2015/08/15504/ http://www.thepublicdiscourse.com/2015/08/15504/#comments Mon, 17 Aug 2015 11:00:54 +0000 http://www.thepublicdiscourse.com/?p=15504

On August 17, 1787—exactly one month before the close of the Constitutional Convention at Philadelphia—the Framers of what eventually became the US Constitution were laboring over the wording of a critical government power: the power over war, peace, and foreign affairs. In this famous debate, memorialized in James Madison’s Notes on the Convention, we find important lessons for today.

On that crucial August day, the Framers refined the division of the war power between Congress and the president. Congress—and not the president—was given the power to declare war. But the president—and not Congress—was left with the power to defend against attacks, to conduct war as Commander in Chief, and to make peace (by treaty or otherwise) as an aspect of his general power over foreign affairs.

Constitutional Distinctions

The precise issue on the table that morning was the provision of the draft Constitution giving Congress the power “to make war.” Madison’s notes report, first, an objection from “Mr. Pinkney” to “vesting this power in the Legislature” as its proceedings were “too slow.” Other delegates added their criticism of the ability of the legislative branch to make quick determinations.

Then Madison and Massachusetts delegate Elbridge Gerry—who, many years later, during the War of 1812, would become Madison’s vice president—proposed a subtle but important change in the language “to insert ‘declare,’ striking out ‘make’ war, leaving to the Executive the power to repel sudden attacks.” After much further discussion, the motion was adopted. Rufus King of Massachusetts contributed the observation that the power to “make” war might be understood to include the power to “conduct” war, an executive function; this observation persuaded Connecticut to join in support of the change.

The Framers also considered, but rejected, a motion to add the words “and peace” to Congress’s power to declare war, probably because matters of truce, treaty negotiation, and foreign affairs generally were considered properly executive. (The Treaty Clause of Article II provides that the president may make treaties only with the advice and consent of the Senate, by a favorable vote of two-thirds.) It was only the power to initiate a condition of war that the delegates thought crucial to vest in the legislative branch.

Though the Framers’ Philadelphia debates are cryptic and contradictory at times, several points seem clear. First, the provision was understood to vest in Congress, and not in the president, the decision whether the nation should go to war. Second, the change from “make” to “declare” was considered an improvement because it would leave with the president the traditionally understood executive power to defend the nation against attacks, thereby providing for those situations where Congress would be too slow in acting to protect the security of the nation. Third, the change from “make” to “declare” would avoid confusion about who had the power to conduct—to execute—war. That power, all seemed to agree, was the president’s alone, both as a matter of the executive power and as reinforced by the clause empowering the president as “Commander in Chief” of the nation’s armed forces. Fourth, and finally, the power to make or declare peace—the power of diplomacy and the conduct of foreign affairs generally—appears specifically to have been withheld from Congress and left with the president.

Declaring and Conducting War in Practice

How well have recent presidents and Congresses adhered to the Constitution’s division of the powers of war and peace? Not very: Many modern presidents have asserted the power to take the nation into a state of war without prior congressional authorization.

Notorious examples include President George H.W. Bush, who famously asserted that he didn’t need permission from “some old goat in Congress” before launching the Gulf War of 1991. In the end, however, Congress did authorize the Gulf War, making it completely constitutional.

So too lawyers for the next President Bush—George W. Bush—asserted unilateral presidential war-making authority but nonetheless actually obtained full authorization from Congress. The sweeping “Authorization for Use of Military Force” of September 18, 2001, is the constitutional equivalent of a declaration of war, and a stunningly sweeping one: It authorizes “all necessary and appropriate force” against persons, nations, or organizations connected or affiliated, directly or indirectly, with the September 11 attacks—essentially al Qaeda and its allies, franchises, affiliates, and harboring nation-states. That authorization remains in force today. Bush also obtained a separate, overlapping authorization for the use of military force in Iraq.

President Barack Obama, perhaps surprisingly, has been the most flagrant in flouting the Constitution’s allocation of war powers, engaging in offensive force against Libya in 2011 without any constitutionally sufficient authorization. His lawyers defended the action on the ridiculous theory that it is not a “war” if the president doesn’t think it’s a “war,” that military action served United States interests, and that past presidents have done things such as this. (The last point is true, but irrelevant: President Truman waged the Korean War without congressional authorization. That doesn’t change the meaning of the Constitution; it simply means that the Korean War, whatever its moral merit, was unconstitutional. The same can be said for President Clinton’s months-long sustained air war in Kosovo in 1999: perhaps morally justified, but still constitutionally deficient.)

In 2013, President Obama also asserted that he did not need authorization to attack Syria’s regime for its use of chemical weapons, but then backpedaled and asked Congress for the authority he said he didn’t need, before finally abandoning the whole matter to the trusty hands of Vladimir Putin. Most recently, President Obama this past winter proposed a new authorization to use force against ISIS in Syria and Iraq. But his bewildering proposal both ignored the fact that the September 18, 2001, authorization already covered this al Qaeda spinoff and actually constituted a proposed de-authorization of military authority by limiting its length to three years. For good reason, Obama’s disingenuous de-authorization went nowhere. (He’s waging war against ISIS on the basis of the prior authorization.)

Congress has been no better, frequently ignoring the Constitution’s assignment of the war-conducting power to the president as Commander in Chief and trying to micro-manage war from the legislative sidecar. While the Constitution admits of certain legislative powers in conjunction with carrying out war, Congress cannot constitutionally employ these powers in such a fashion as to hamper the president’s sole power to decide when, how, and where to use force in an authorized military action and, further, to determine policies of engagement, capture, detention, and military punishment with respect to an enemy force or power. Congress’s power “to declare War” is an on-off switch; it is not a “dimmer switch” with which to control the Commander in Chief.

Finally, there is the power to make—or preserve—peace. Ever since President George Washington declared American neutrality in 1793 (in the then-latest war between France and Britain), it has been well established that Congress’s power to authorize war is not a power to prevent the president from declaring neutrality or to interfere in any other respect with the executive’s power to formulate and conduct the foreign affairs policies for the nation. Congress probably cannot even make the president fight a declared war against his will. And it certainly cannot prevent the president from declaring an armistice or truce. True, the president can only make a “treaty”—a formal legal arrangement that has the force of US law under the Constitution’s supremacy clause—with the Senate’s two-thirds consent. But the president may interpret, apply, and even suspend treaties, and can even enter into (nonbinding) “executive agreements” with foreign nations as part of the general executive power over foreign affairs. The “executive agreement” power is simply an application of the ordinary foreign affairs authority of the president, albeit an application with dramatically important implications.

For better or for worse, President Obama’s deal with Iran falls into this last category. It is not a treaty, so it is not binding law under the US Constitution—and it does not constrain a future administration’s decisions, other than as a matter of domestic and international politics. Congress need not approve it, and has no true constitutional power to defeat it; Congress’s only power is with respect to legislative imposition or removal of economic and trade sanctions against Iran, as part of its power to regulate international commerce. (The legislation earlier adopted to give Congress a “vote” on the Iran deal is really a vote over sanctions, and it is structured so as now to give Obama the upper hand: he ultimately gets his way unless Congress can now pass a disapproval resolution over his veto.)

In all of this, one can hear the echoes of August 17, 1787. Under the Constitution—at least as originally designed—Congress has the power to declare war, but not the power to prohibit peace or interfere with the conduct of foreign affairs. The president has the power to defend the nation by repelling attacks, and the power to conduct a war authorized by Congress as he sees fit, but no legitimate power to initiate offensive military hostilities against an enemy force or power on his own authority. Certainly not all of America’s practice for the past 228 years has conformed to the Framers’ design. But that does not mean that the meaning of the Constitution with respect to the powers of war and peace has changed. It means that the Constitution has, in several instances, been violated.

Michael Stokes Paulsen is Distinguished University Chair and Professor of Law at the University of St. Thomas in Minneapolis. Luke Paulsen is a 2014 graduate of Princeton University and a software engineer in Mountain View, California. They are co-authors of The Constitution: An Introduction (2015), recently published by Basic Books.

Conservatives, Check Your Privilege http://www.thepublicdiscourse.com/2015/08/15432/ http://www.thepublicdiscourse.com/2015/08/15432/#comments Fri, 14 Aug 2015 11:00:30 +0000 http://www.thepublicdiscourse.com/?p=15432

A few weeks ago, I was driving along the northern tier of Ohio when that blasted engine light started blinking. When my van shook as I brought it to a stop, I knew it wasn’t a false alarm. I ended up pulling into a gas station outside of Sandusky, Ohio, and called AAA to arrange a tow.

Alan, the gruff and grizzled tow truck man, arrived nearly an hour later. As we pleasantly chatted during our ride to the nearest Honda dealer, it suddenly occurred to me that I might feel differently about this situation if I were a different person. Even though Alan turned out to be very kind, if I were a woman, a person of color, or a man with stereotypically “gay” mannerisms, I realized that I might not feel so comfortable—so sure that I was safe and would be treated fairly—being alone with a broken-down car in semi-rural Ohio.

This is, I think, a tiny example of what is commonly called “privilege.” I have never had to experience the world the way a woman or a person of color does, nor will I ever be able to do so. Therefore, I should take seriously the claims these others make about their experiences of our society—especially claims about hardships I will never be able fully to understand.

Is Privilege Real, or Just a PC Invention?

I suppose it’s not a coincidence that I noticed a minor instance of privilege in my life at this moment when talking about privilege—and policing discourse using privilege—is de rigueur. Whether it’s street harassment of women, the public display of the Confederate battle flag, or the onward march of the LGBT movement, “privilege” is on everyone’s lips. The tone with which we speak of privilege, though, ranges from mockery to exasperation to reverence.

The mocking tone mostly comes from conservatives who see “privilege” as a politically correct progressive invention. Conservatives see the concept being deployed in ways that always seem to militate in favor of progressive political preferences, and so we reject it outright.

But it doesn’t take much reflection or observation to realize that “privilege” describes a real phenomenon. In fact, I’d go far enough along with the privilege-whisperers to agree that the ability to go through life without noticing privilege is an instance of privilege. Taken narrowly, the idea that we cannot fully understand the life experiences of others, especially others with radically different backgrounds, is obviously correct. Taken more broadly, the idea that some groups are, on average, objectively disadvantaged in our society, whether due to historical oppression, enduring prejudice, or cultural marginalization, cannot simply be waved away—least of all by conservatives, who believe we are all embedded in a tapestry of traditions that inform our personal and communal identities.

What does it mean, though, to say that privilege is real? It means that we should take seriously the claims of others about their experiences—especially those experiences we find most difficult to comprehend. This is respect. And it means that we need to discern how to address those experiences in our personal interactions and in our politics. This is justice. To ignore those experiences is, on the other hand, to indulge a competing conservative identity politics.

False Neutrality

Properly considering privilege, or “checking” one’s privilege, however, does not neatly solve political questions, as progressives like to imagine. When the concept shifts from description of our social condition to prescription of norms of discourse, it becomes yet another attempt to hijack politics with a false neutrality.

In the wake of the Obergefell ruling, a friend took to social media to praise the decision, but also to urge same-sex marriage supporters to respect their opponents’ good will and reasonableness. He was promptly savaged from his left flank. Most notable was the admonition that his ability to see both sides of the marriage question was a manifestation of his privilege as a heterosexual cis-male, and thus must be relinquished (“checked”) out of deference for the truly unprivileged: LGBT folks whose lives will be changed by Obergefell. The fact of privilege means that only unprivileged opinions count on matters of importance to them.

This standard of discourse is supposed by its adherents to be neutral. All we have to do is perform objective, if complex, privilege arithmetic and we can discern whose opinions are controlling on any given issue. In reality, privilege arithmetic is never neutral. Preexisting moral commitments are smuggled into the equation, and “privilege” becomes a faux-neutral veneer for substantive moral claims: every hierarchy of oppressions and prejudices depends upon a hierarchy of values. “Privilege” becomes just another example of the problem of neutral rhetoric.

Consider how religious persons are treated in privilege arithmetic. In theory, every “other” gets to tell her own story—to express her own lived experience and hardship—and to have that story taken at face value. This is the respect accorded to persons of color, persons who identify with the LGBT community, and so on. But when a Barronelle Stutzman or a Melissa Klein brings her story to the table—a story of government coercion against the living out of her beliefs, and therefore of violence being done to those beliefs—is she accorded the same respect? Of course not. Her motives are questioned, her beliefs impugned, and her livelihood threatened.

It might be claimed that a privilege arithmetic that favors the same-sex couple over the florist or the baker is the objective arithmetic. But that is a value judgment. And when you consider the relative harms—religious coercion and financial ruin versus going to a new vendor—and the relative power—small businesswomen versus attorneys general—it seems to be a thin value judgment indeed.

When Is Privilege-Talk Productive?

We can acknowledge the reality of privilege but avoid using it as an intellectual façade by recognizing the conceptual weight it can bear. Specifically, taking privilege seriously allows us more perfectly to understand what is right and just in particular situations, but it cannot bear the weight of rightness and justice by itself. Consider the examples of privilege-talk I mentioned before: street harassment, Confederate flags, and LGBT issues.

Street harassment is an easy one. I hardly see it because it’s not directed at me; this is my privilege as a man. Women I know say they’re regularly made uncomfortable and frightened by street harassment. Clearly, there is no social value to cat-calling. Conservatives should recognize a symptom of destructive sexuality when they see it—and have no problem condemning it. This is not a case where we must weigh the claim of the less-privileged against a competing value; it is one where taking the “other” seriously awakens us to something harmful occurring in our midst.

Confederate flags, on the other hand, have an argument from history and heritage—at least that’s easy for me to say as a white Yankee. But white people need to trust our black neighbors when they say that they find the flags to be a reminder of oppression—especially when flown on public grounds. We ask ourselves: Is this feeling of our fellow citizens reasonable? In this instance, it is. We balance that against a heritage that is inseparable from racial oppression, we understand, and we say “take it down.”

In some respects, thinking about privilege and LGBT issues is similar to these first two. We cannot simply ignore those who claim to feel endangered or to have been harmed based on their real or perceived identification with the LGBT community. We shouldn’t abandon our critical faculties—some claims are more reasonable than others—but we should grant the benefit of the doubt. This is treating others as people, rather than as exemplars of a type or as culture war combatants.

But this approach doesn’t mean the uncritical acceptance of particular policies; privilege is morally relevant, not morally dispositive. The disturbing fact of bullying, for instance, cries out for social change that conservatives should be advocating for, but it does not necessitate a speech policy that will be used against religious students. That one group has been historically dominant over another does not justify flipping the tables; that’s not justice but vindictiveness.

The advocacy of the less privileged can awaken us to hidden wrongs and even move us to support policies we hadn’t previously considered, but it cannot justify something that is otherwise unjustifiable; then it becomes a thin cover for an opposing moral system. Thinking about privilege can help us discern among policies offering competing goods, but it cannot change what is good.

Taking privilege seriously is part of living together in a diverse society. It’s part of justice. It’s part of the truth. But it’s not dispositive—and it’s never a substitute for clear-eyed moral reasoning.

Brandon McGinley is the Director of Strategic Initiatives for the Pennsylvania Family Institute.

The Stakes of Free Exercise http://www.thepublicdiscourse.com/2015/08/15439/ http://www.thepublicdiscourse.com/2015/08/15439/#comments Thu, 13 Aug 2015 11:00:48 +0000 http://www.thepublicdiscourse.com/?p=15439

A few weeks ago, in response to my critique of what I labeled “Justice Scalia’s Worst Opinion,” Matthew Franck offered the best defense he could—perhaps the best defense possible—of Justice Scalia’s awful quarter-century-old opinion in Employment Division v. Smith. Smith interpreted the Free Exercise Clause of the First Amendment narrowly, as prohibiting only laws targeted at religion or religious practice.

I embraced the view, contra Scalia, that the constitutional right to the “free exercise” of religion was an affirmative substantive right against the application of laws or policies that otherwise would have the effect of prohibiting, punishing, or penalizing sincere religious exercise—irrespective of government’s purpose in enacting them. Put colloquially, the focus is not what government targets but what it hits. Absent the most extraordinary of abuses or exceptional circumstances, authentic religious practice should prevail over the usual commands of law, where the law’s commands would prohibit free religious exercise.

That was back in April and May. A lot has changed since then: we now live in the world of Obergefell v. Hodges. The ruling is an alarm bell in the night for religious freedom. Obergefell gives one cause to pause and rethink one’s views about the scope of the constitutional right to the free exercise of religion, in light of the grim threats to free exercise posed by the brave new world the Court’s decision has created.

Rather than merely attempt to rebut Franck’s excellent piece, I want to try to persuade him—and the many like-minded, judicial-restraint-focused conservatives—of the stakes of Free Exercise. My hope is that a clear sense of those stakes might lead one better to appreciate the powerful original-meaning, textual-logic arguments for the broad understanding of the right to the free exercise of one’s religious convictions and conscience.

Never has such reconsideration been more urgently needed. The rise of a national constitutional right to same-sex marriage is a dagger pointed at the heart of religious liberty. It is only a matter of time before “marriage equality” advocates attempt to extinguish faith-based opposition to participation in, or endorsement of, same-sex marriage. The goal will be to run religious conviction off the field as simple illegitimate bigotry. The vehicle will be, and already is, ostensibly “neutral rules of general applicability”—the exact kind of formally neutral rules from which Smith holds the Free Exercise Clause provides no protection: “civil rights” and “anti-discrimination” laws that are general in scope and do not target religion specifically.

Now more than ever, it is important to think well and carefully about whether Smith’s rule is right. For if it is, an explicit textual right to the “free exercise” of religion supplies no defense against laws premised on protecting, furthering, and eliminating resistance to a non-textual, judicially invented “fundamental” constitutional liberty.

Who’s Afraid of the Free Exercise Clause?

The arguments for the Scalia-Franck narrow understanding of free exercise are rooted, at bottom, in a kind of legal fear—fear about the types of things that judges might do and the type of results that could obtain if religious freedom really meant that religious believers, acting in good faith, could, barring extreme cases, live and act in accordance with their beliefs, free from the application of general legal rules that prohibit their actions. Smith is less about original meaning than about such fears. Scalia’s argument was not so much that the Free Exercise Clause did not have such a broad original meaning but that it would be a bad thing if it did: it would tend toward either anarchy or unbridled judicial discretion.

That position is not a bad one: The “judicial restraint” argument is an appealing one. The “judges’ discretion must be checked” argument is a valid one. The “we would be courting anarchy” argument-from-fear is an understandable one.

But I would like to suggest two possibilities to my “restrained” friends. First, these (legitimate) fears of what judges might do with a broad Free Exercise Clause should not drive their interpretation of the meaning of the provision itself. If the correct reading of a constitutional provision is that it grants a seemingly dangerous amount of individual liberty, or entails an unwelcome judicial role, one should not therefore discard the correct reading for an incorrect one. Policy-driven interpretation is, after all, exactly what conservatives should find most objectionable. It is at the core of why the Court’s decision in Obergefell is so badly wrong. We must take great care not to commit the equal and opposite error.

At an academic conference a few years back, I advanced a strong view of the Free Exercise Clause as premised on the framing generation’s acceptance of the precedence of religious liberty to the ordinary commands of the civil state. My responding commenter, the brilliant Eugene Volokh of UCLA Law School, charged that my reading would make the Free Exercise Clause a “Super-Lochner” provision. At first, the comment cut me to the quick. No one is a more dedicated opponent of “substantive due process” than I. But I responded with a hypothetical: Suppose the Constitution really did contain something like a “Substantive Due Process Clause” of a Lochner-ish nature? What if the Constitution contained, for example, a provision that said something like “Government may make no law prohibiting the free exercise of economic freedom”?

As undesirable as such a provision might be, it would be the duty of judges to apply that provision faithfully, irrespective of its desirability, the economic consequences it would produce, or any regrettable judicial judgments its fair interpretation might entail. If the proper performance of that task produced something resembling a Lochner doctrine, then so be it. (The correct objection to Lochner is not that it involves courts in making such judgments. It is that the doctrine of substantive due process in fact has no legitimate basis in the constitutional text.)

So too with the Free Exercise Clause: If the clause is correctly read as a substantive freedom for religious exercise, it must be interpreted and applied as such. There is no “free exercise of economic freedom” clause but there is a provision saying that government may “make no law prohibiting . . . the free exercise” of religion. Religious freedom is a substantive constitutional right. Legal conservatives simply must be open to the possibility that the original textual meaning of the Free Exercise Clause confers a broad substantive immunity from the consequences of general government laws and regulations—a reading supported by the clause’s linguistic meaning, by its internal religious-political logic, and by considerable historical evidence of specific intention at the time.

Second, I submit to my restrained friends that there is new reason to reevaluate the fear-based reading of the Free Exercise Clause. We now live in a second-best legal world in which the Court itself has invented textually unsupportable “constitutional rights” that, if taken as stating a new baseline “neutral rule of general applicability,” will eviscerate freedom of religious exercise.

If the concern is over what judges might do in the name of constitutional liberties, I would gently note that we are long past that point already. Rather than dread what judges might do in the name of a broad understanding of religious free exercise, we should dread what judges have already done and might now do to this actual constitutional liberty in the name of other, made-up constitutional rights. The greater fear should be that the Free Exercise Clause will continue to be under-read, and that religious freedom will be not just under-protected but destroyed by the New Legal Order.

This is not an argument that we should read the Constitution differently than we otherwise would, in a way contrary to its original meaning, because of something else the Court has done wrong. Rather, it is an argument for thinking again, seriously, about the original meaning of the Free Exercise Clause, and for questioning old arguments in light of changed circumstances. New situations do not change the text’s meaning, but they sometimes shed new light on what that meaning is.

Franck and Candid

My original essay makes the case for Smith’s wrongness on the merits, and I won’t repeat that case here. But I will respond briefly to Matthew Franck’s critique of my critique of Smith, in light of what I have just said about the need to reappraise the stakes of getting Free Exercise right.

First, Franck says that my criticism of Justice Scalia’s opinion is overstated: “the principle of ‘no targeting of religion’ is not exactly nothing as a substantive protection,” and Scalia has written or joined excellent pro-religious freedom opinions in other cases, including Chief Justice Roberts’s magnificent majority opinion for a unanimous Court in the Hosanna-Tabor case—an opinion whose praises I have sung elsewhere.

True! Justice Scalia is unquestionably sympathetic to religious freedom. I mean to convert him, too, if I can. Scalia, more than anyone else, should be fearless in his commitment to the original meaning of a broadly worded text like the Free Exercise Clause and follow its logic where it leads him. He should not fear giving the clause its full force. Especially in light of Obergefell, the trust he expressed in Smith that all would work out well because of generous legislative accommodations of religion should give way to fresh consideration of the meaning of Free Exercise.

Second, Franck says that the broader reading of free exercise “is an effective reversal of the traditional presumption of the constitutionality of legislation.” He suggests I have “uncritically” abandoned the “posture of judicial restraint” and embraced “an activist posture” that is “most closely associated with justices such as William O. Douglas, Earl Warren, and William Brennan.” Ouch.

But I think the critique is flawed. If the original meaning of Free Exercise is as I believe it to be, it is the Free Exercise Clause that reverses the usual principles concerning the presumption of constitutionality. A law, otherwise constitutional in its general application, may not be applied in such fashion as to bar the exercise of religious faith, absent exceptional justification. If that’s what the clause actually means, it is an improper application of the notion of “judicial restraint” to give the clause less meaning than it really has. Note further that free exercise claims do not strike down the law itself; they merely forbid its application in such fashion as to prohibit the free exercise of religion.

Franck’s third claim is that my position is “question-begging.” But his actual argument here is that my view is contrary to pre-1960s precedent and would create what Scalia called an anomalous “private right to ignore generally applicable laws.” But “anomaly” here is just a meaner word for “distinctive.” If that is the charge, I will own it: the Free Exercise Clause does create a unique type of constitutional liberty—a carve-out substantive freedom that limits what government can do to interfere with religious freedom. If “anomaly” means that the Free Exercise Clause has its own distinctive, singular meaning and is not a pure overlap with Free Speech or Equal Protection, I think that that is clearly right.

Fourth, Franck thinks the historical case for free exercise exemptions dubious and points to competing scholarship in this regard. I like Michael McConnell; he likes Philip Hamburger. (Actually, I like Hamburger, too.) Fair is fair: let’s reopen this debate. But let me add one methodological caveat: the search is for the original linguistic meaning of the words of the text, in historical context, and not for anyone-in-particular’s subjective “intention” or “expectation” or “understanding.” Much of the historical debate lines up competing inferences from competing quotations from different founding-era debates and discussions. This is useful stuff, but primarily to the extent it displays the range of linguistic usage at the time. My sense is that, although this evidence cannot drive one all the way to one conclusion or another, it better supports the broad reading of Free Exercise.

Finally, Franck thinks I have misread, or over-read, Scalia’s reliance on policy considerations and social circumstances in the Smith opinion. Maybe. But for anyone who has read thirty years of Scalia opinions, it is hard not to notice that Smith leans heavily on considerations of social and judicial policy. Scalia does express the fear of judicial balancing, untoward results, and “courting anarchy.” And those fears do appear to drive the analysis, to a degree.

All of which is to repeat the proposition with which I began. In the world after Obergefell, conservatives sympathetic to religious freedom, but inclined to accept Scalia’s most doubtful constitutional opinion on “judicial restraint” grounds, should be open to reconsideration of their positions. The stakes of Free Exercise are simply too high, especially today, to rest on the dubious grounds of Smith’s reasoning.

Michael Stokes Paulsen is Distinguished University Chair & Professor of Law at the University of St. Thomas in Minneapolis. He is co-author, with Luke Paulsen, of The Constitution: An Introduction, just published by Basic Books.

"Shut Up, Bigot!": The Intolerance of Tolerance http://www.thepublicdiscourse.com/2015/08/15398/ http://www.thepublicdiscourse.com/2015/08/15398/#comments Wed, 12 Aug 2015 11:00:39 +0000 http://www.thepublicdiscourse.com/?p=15398

America is in the midst of a raging national debate on issues surrounding sexuality and gender. If you dare to suggest that gender is determined by sex and is immutable, that same-sex sex acts are immoral, or that marriage is a permanent, exclusive union of husband and wife, then you will be called an intolerant bigot, hater, and homophobe.

Where does the charge of bigotry come from? Is it just a passing fad, a political and social tool for power and control, or do its roots go deeper?

Bigotry is defined as “intolerance toward those who hold different opinions from oneself.” Notice that bigotry is not intolerance toward the opinions or beliefs of persons other than yourself, but intolerance of the other person. Bigotry is not simply disagreeing with what someone else believes; it is an unwillingness to tolerate or accept the person who holds those beliefs.

A little reflection on this definition will reveal that the vast majority of bigotry accusations populating the internet and public discourse are not legitimate ones. On the contrary, they are the consequence of a mistaken view of tolerance that is itself a product of a warped postmodern epistemology.

Two Views of Tolerance

Under the traditional view of tolerance, two things were required: first, that you respected the right of the person in question to hold his beliefs and voice his opinions; and second, that you had a right to disagree with those beliefs and contest them both privately and publicly. As D.A. Carson paraphrases it in The Intolerance of Tolerance, “I disapprove of what you say, but I will defend to the death your right to say it.” You do not have to like the person with whom you disagree, but you do have to respect and tolerate his right to speak.

This conception entails tolerance toward the person while allowing intolerance toward beliefs. Since beliefs are abstract objects communicated through propositions in written or spoken language, they have no inherent dignity in themselves. It does them no harm or offense to disagree with them or offer a rebuttal. Disagreeing with or being intolerant of a belief, in this view, is fundamentally different from being intolerant or hateful toward the person who holds that belief. In other words, this definition is built on a clear and obvious distinction between a person and his beliefs.

The traditional understanding of tolerance reflects a certain epistemology: namely, that there is such a thing as truth, it can be known, and the best way to discover the truth is through debate, reflection, and investigation. The pursuit of truth requires mutual cooperation, serious consideration of opposing beliefs, and persuasion through the use of reason. Coercion, exclusion, slander, and threats of force have no place in the search for truth.

Over the course of the last century, however, the old view of tolerance has been slowly transformed. The emergent new tolerance holds that persons who are truly tolerant accept the views of others and treat these individuals fairly. The key distinction is that under the old tolerance, one would accept the existence of other views even while rejecting some views as false; but under the new tolerance, one accepts these other views. In other words, all views are seen as equally valid and true.

The new tolerance rejects “dogmatism and absolutism,” affirms that each person has the right to live by his convictions, and eschews imposing one’s views upon others. Yet underlying this view of tolerance is a fundamental contradiction. Is not this concept of tolerance being imposed on all peoples and cultures, in direct violation of one of its own tenets? And as Carson points out, “does not the assertion, ‘Tolerance . . . involves the rejection of dogmatism and absolutism’ sound a little, well . . . dogmatic and absolute?"

Therefore, despite its appeal and aplomb, the new tolerance is both intolerant and internally incoherent.

Intolerance: The Supreme Sin

A critical error of the new tolerance is that it conflates beliefs and persons. In this view, to accept divergent beliefs is to be accepting and respectful of the person who holds them; conversely, to reject a belief as untrue is thought to be a rejection of the person who holds that belief. To say, “I think your view is false,” is akin to saying something unkind and insensitive about the person with that belief.

Thus according to the new tolerance, to be intolerant toward another’s beliefs is to be intolerant toward the person. And intolerance toward persons, incidentally, is the definition of bigotry. So when traditionalists voice dissent against the array of beliefs held by sexual liberals, this is interpreted as a rejection of the people who hold those views. Thus, within the incoherent paradigm of the new tolerance, the accusation of bigotry appears justified.

For practitioners of the new tolerance, intolerance is thought to be the supreme sin because it offends and disrespects persons. No one deserves to be offended or disrespected, and such an offense is considered an assault on a person’s very dignity as a human being. This is why the rejection of same-sex marriage, homosexual practice, and transgenderism is believed to be an attack on the dignity of people with such attractions and lifestyles. This is why Justice Kennedy, in his majority opinion in Obergefell v. Hodges, appealed repeatedly to the dignity of LGBT individuals as a basis for their inclusion in the institution of marriage (as opposed to the metaphysical nature of marriage). To exclude them would have been an intolerant act, a defacing of their human dignity, and a supreme vice.

The claims of bigotry that stem from the new tolerance are moral claims: To reject the beliefs of the new sexual mores is to be intolerant of persons and to attack their dignity, and this is wrong. It is impossible to be a virtuous citizen if you are intolerant in this manner, and unvirtuous citizens who are bigots have no place in the public square; they are to be ridiculed, excluded, and publicly shamed.

This is why the battle for religious liberty and freedom of conscience is so important. There is the very real possibility that conservative voices and freedoms will be stamped out just as racist behaviors and attitudes have been. Some individuals naively claim that Obergefell v. Hodges will have no effect on issues of religious liberty, but such views ignore the current attacks against those who hold to traditional sexual norms.

If the current view of tolerance retains its cultural grip, conservatives will be systematically discriminated against and socially ostracized. Teachers will be excluded from faculty positions at liberal universities or denied tenure altogether. Businesses will be forced to abide by laws that conflict with their religious beliefs and consciences. Commencement speakers and guest lecturers will be disinvited from academic events, publishing houses and journals will refuse to print certain perspectives, colleges and universities will be denied accreditation and federal funding, and on and on. In other words, while the letter of our First Amendment rights might be upheld, their spirit and practice will be rejected by the greater society that is still functioning according to the mistaken view of tolerance.

Due to such repercussions it is imperative that conservatives, libertarians, and traditionalists work together to dislodge the new view of tolerance from its cultural pedestal.

The New Tolerance’s Rotten Postmodern Foundation

The conceptual underpinnings of the new tolerance can be traced back to postmodern epistemology. Postmodernism is complex, to be sure, but at its heart it is a form of cultural relativism. It rejects metaphysical realism in favor of the claim that reality is a social construct.

Objective and universally binding truth claims are thought to be impossible.

The only way to discredit the new intolerance is by attacking the philosophical foundations of postmodern theory. Unfortunately, postmodernism has thoroughly worked itself into Western culture, shaping Western assumptions and plausibility structures. “Plausibility structures” is a phrase coined by sociologist Peter Berger, referring to structures of thought widely and unquestioningly accepted throughout a given culture. They dictate what individuals in that culture will consider to be possible or impossible, plausible or implausible.

Over the past half century, the new view of tolerance has become a foundational plank in the conceptual structure of Western thought. This means that individuals who act according to the old understanding of tolerance will be met first with befuddlement, and then with scorn. The old tolerance is unrecognizable in a culture that has embraced the new vision of tolerance and adopted it as a plausibility structure.

Conservatives who dispute the views of sexual liberalism are called bigots because those who embrace the new sexual mores are beholden to the new tolerance as a plausibility structure. Postmodern liberals cannot even comprehend how one can simultaneously reject a belief and accept the person who holds it. Thus, the charges of bigotry that spew forth reveal the intellectual and interpersonal poverty and dysfunction in which these persons live.

The Way Forward

The new tolerance turns out to be just as intolerant as the intolerance it abhors. By demanding that all views be considered equally valid, it cannot tolerate the old but correct view of tolerance, and it therefore becomes the intolerance of true tolerance. In the end, tolerance itself is destroyed, yielding instead to tyranny. When this happens, the new tolerance wields the libel of bigotry in order to intimidate and silence dissenters and impose conformity.

We must challenge postmodern thought at a fundamental level and reintroduce the old vision of tolerance into society. This will be most effective if we practice the old tolerance, visibly and powerfully demonstrating that it is possible to hold to objective truths and dissenting views while being respectful and loving toward those with whom we disagree. Such interpersonal virtues are rarely seen in a culture where social media exchanges and comment threads overflow with vitriol. Only by consistently and unfailingly teaching and practicing the old tolerance—and defending its epistemological foundations—will there be any chance of overturning the new tolerance.

So what will the future of American society and culture be? Will it be a place for true tolerance, where competing ideas and visions of human flourishing are openly and respectfully debated in the public square? Or will the new tolerance create a totalitarian regime that controls both private thought and public engagement through accusations of bigotry while masquerading as enlightenment and progress?

It’s up to American citizens to decide. We must not be intimidated, and we must not be silenced, for the freedom and flourishing of an entire culture and her people are at stake.

Ben R. Crenshaw is pursuing a double MA at Denver Seminary and is a teaching fellow at the Gordon Lewis Center for Christian Thought and Culture.

A Call for a Unified Party http://www.thepublicdiscourse.com/2015/08/15333/ http://www.thepublicdiscourse.com/2015/08/15333/#comments Tue, 11 Aug 2015 11:00:48 +0000 http://www.thepublicdiscourse.com/?p=15333 The Obergefell decision should serve as a wake-up call to conservatives. In particular, conservatives should rethink the Republican Party platform and work to refocus the GOP around the broad theme of “nature.”

The Supreme Court’s decision in Obergefell is a harbinger not just of the end of the public policy argument over marriage, but of our time’s intellectual poverty, disregard for the Constitution, and democratic breakdown.

Looked at another way, however, the decision forces conservatives to look in the mirror, to assess the grave state to which they have fallen. Sure, the battle for conjugal marriage has been fought ably and valiantly, and there remain bright conservative politicians and commentators. Nevertheless, any candid assessment of the current political situation reveals a disastrous political losing streak for conservatives; with the Reagan Revolution now in the distant past, the Republican Party faces a grave and uncertain future.

In the wake of Obergefell, a recent New York Times article hits the mark on at least one point. The article correctly announces that the Left has won the defining culture battles of the recent decades. According to the laws of political physics, Republicans will be required to extricate themselves quietly from many socially conservative positions, lest they fail even to attempt to appeal to 51 percent of Americans. To be sure, Obergefell apparently disrupts the perceived link between corporatism and the Republican Party.

To which issues will the party flee? Three possibilities present themselves: the economy, national security, and religious liberty. Unfortunately Republicans have not succeeded in crafting persuasive visions for any of these three issue areas in the past decade, and with the Left’s recently gained momentum, this is very unlikely to change. Republicans are still—rightly or wrongly—associated with and blamed for the Great Recession and the Iraq War, and the recent failure to pass and retain a serious Religious Freedom Restoration Act in Indiana is probably indicative of future difficulties as far as religious liberty is concerned. In light of the gay rights movement’s resounding success in linking the fight for same-sex marriage to the Civil Rights Movement, it is a relatively small step to fit “inclusion” under the anti-discrimination umbrella.

Is there any hope that Republicans will be hoisted out of this political pit in the 2016 presidential race? Perhaps, but this appears highly unlikely. Of the now seventeen and counting Republican candidates who have come forward, virtually all have had palpable difficulties of one sort or another in crafting a coherent and consistent vision that can appeal to both the conservative base and the general public.

This isn’t merely incidental. The fact of the matter is that there is no clear idea on the Republican side of what holds the party together—the party is an incoherent hodgepodge of social conservatives, economic conservatives, foreign policy hawks, libertarian leaners, and evangelicals. The Democrats, on the other hand, do have a clear—even if clearly wrong—roadmap, and Hillary Clinton is poised to follow that map to the White House in 2016.

A 2016 defeat would throw the Republican Party into full crisis mode, if it doesn’t reach that point earlier. I for one have little confidence that the current Republican Party leadership will suddenly come up with a coherent and compelling case for a viable program of public policy. Entrenched interests and commitments, financial and otherwise, influence party organizations in ways that tend not to be in accord with the best thinking and strategy. The party needs a forceful nudge from its more thoughtful and less politically entrenched element. Republicans need to be shown the way to a successful future in keeping with their deep connection to the traditional American past.

The new Republican Party should maintain and highlight the connection to American founding principles that many conservatives and Republicans hold dear, and that remains politically uncontroversial among large segments of the general American public. This connection to the past should, however, also be clearly linked to an attractive vision for the future. How can this be done? I propose an emphasis on the theme of nature.

This theme has been brought to the forefront of recent American political discourse in intriguing ways. On the one hand, there is the extraordinary attention placed on the issue of climate change and environmental stewardship as one of the defining issues of our time, an assessment echoed by Pope Francis’s recent encyclical Laudato Si’. Relatedly, there is the powerful grassroots movement for more “natural,” organic, non-GMO foods. These issues are usually associated with political liberals.

On the other hand, nature as it figures in human nature has long served as the most persuasive foundation for arguments in favor of traditional conservative positions on the issues of abortion, same-sex marriage, economic freedom, and limited government. The unborn have a natural right to life; same-sex marriage is contrary to the natural law; there is a natural right to property that antedates government; and government exists to secure natural rights rather than to dole out artificial ones.

The connection between these apparently disparate kinds of “nature” may indeed seem unclear, and has been in one form or another a matter of debate among philosophers for centuries. For the purposes of electoral politics, however, this is of little moment: “nature” and what is “natural” have a rhetorical power similar to words such as “hope” and “change” that Obama has used so well in recent years. Whether “nature” expresses respect for Creation in general or specifically human moral principles in particular, it is a word with strong positive connotations, ample traditional precedent, and sufficient cognitive content to resonate with American citizens.

I suspect, moreover, that these varied invocations of the single term “nature” are not mere linguistic coincidences: As Pope Francis suggests in his encyclical, humankind’s reconnection to nature-as-it-is-given, as opposed to nature-subservient-to-our-selfish-ends, may be the most comprehensive and unifying theme of our age. An American political party that can articulate this theme in a clear and persuasive way, and connect it to public policies that in some cases cut across the usual conservative-liberal battle lines, could hope to begin to halt and eventually reverse the current trend of decline.

The new Republican Party platform organized around the theme of nature might be built on the following public policy cornerstones:

  1. Legislation reflecting the natural differences between and complementarity of men and women.
  2. Safeguarding the natural right to life of all people by restricting abortion and assisted suicide, attending to violent crime, and instituting a narrowly focused welfare system.
  3. The protection of the environment from anthropogenic destruction and pollution.
  4. Respect for the natural right of property and the economic freedom that flows from it.
  5. Support for more natural food production.
  6. Foreign policy emphasizing national security rather than foreign intervention.
  7. Protections for religious liberty.

Some of these points may seem at odds with one another—for example, points 3 and 4—but this apparent opposition stems largely from the way these issues have been framed, handled, and lobbied in the past. Once they are viewed in the new light of a common concern for nature, their potential opposition may become more manageable. With the aid of this common concern, the task of balancing divergent policy objectives becomes one that more readily admits of rational discussion. What exactly our connection to nature entails may often be unclear, especially at first; but the idea of fostering this connection through public discourse and policy is one that is capable of inspiring and uniting large swaths of the American people.

It is important that Republicans do not simply abandon the social conservatism that has hitherto distinguished them from Democrats. This is important both because there remains a sizable socially conservative portion of the electorate, and because socially conservative positions on issues such as abortion and same-sex marriage reflect timeless truths. It is to be hoped, though, that a repackaging of social conservatism that persuasively associates it with currently popular causes such as environmental stewardship and natural food production may breathe new life into these traditional commitments.

The Republican Party desperately needs guidance through the morass of unconnected and insufficiently articulated policy commitments that have come to define it in recent decades. Conservatives need to find a way to recover what has been lost without simply returning to the past. The theme of nature allows us to draw on the arguments of the past while embracing attractive visions for the future. We would do well to turn to nature now as our forebears in the Revolution once did.

Adam Seagrave is an assistant professor of political science at Northern Illinois University. He is the author of The Foundations of Natural Morality: On the Compatibility of Natural Rights and the Natural Law and editor of Liberty and Equality: The American Conversation.

Conservatism, Free Markets, and America’s Post-Obama Economy http://www.thepublicdiscourse.com/2015/08/15203/ http://www.thepublicdiscourse.com/2015/08/15203/#comments Mon, 10 Aug 2015 11:00:14 +0000 http://www.thepublicdiscourse.com/?p=15203

Regardless of who becomes America’s forty-fifth president in January 2017, the new chief executive and Congress will face considerable social, foreign policy, and economic challenges. That’s true of any president and legislature. The economic problems, however, will be especially formidable: not, we hope, because of something akin to the financial crisis that confronted President Obama upon assuming office, but because of long-term difficulties for which minor policy tweaks won’t be a sufficient response.

These circumstances will present market-inclined conservative reformers with a choice. One option is to pursue incremental improvements, such as tinkering with various regulations or mildly scaling back the rate of growth in entitlements without substantially reducing the scope and depth of government economic interventionism. The other possibility is to take a page out of the progressive playbook. This would be to pursue a more expansive agenda: one that not only pushes the American economy in a free market direction to the same degree that Franklin Roosevelt’s New Deal and Lyndon Johnson’s Great Society seriously undermined economic liberty, but that, like progressivism, also makes a point of underscoring the values that must be promoted and embraced if such transformations are to last.

The Challenge

The negative first-quarter growth recorded in 2015 was a pertinent reminder that, for all the improvements since the nadir of 2008, America’s economy is not in good shape. Many commentators have expressed concerns about the size of America’s public debt and the political difficulties associated with reducing it. Marshaling coalitions to oppose reductions in the biggest federal government spending items—i.e., welfare programs—that consistently cause expenditures to exceed revenues is not hard. Conversely, those who might be expected to support spending reductions are a more politically diffuse, less easily mobilized group.

Another challenge, stated in the 2015 Index of Economic Freedom, is “a 1.6-point decline in overall economic freedom [in America] over the past five years [that] reflects broad-based deteriorations in key policy areas, particularly those related to upholding the rule of law and limited government.” What the Irish entrepreneur Declan Ganley aptly calls “crony corporatism” has also worsened in America insofar as, according to the Index’s editors, “High levels of government spending and the expansion and complexity of the government’s regulatory agenda have increased opportunities for political favoritism and cronyism.” Overcoming this problem is especially hard when so many interest groups across the political spectrum benefit from such arrangements.

Employment presents its own complications. Though the official national unemployment rate has declined from 10 percent in October 2009 to 5.5 percent in May this year, the overall employment situation is best described as mixed. Long-term unemployment, for instance, constitutes a larger share of total unemployment than it did in the aftermath of any previous recession. Closely related to this is minimal wage growth. As a recent Economist article stated, “Inflation-adjusted wages for typical workers are stagnant. In fact, they have barely grown in the past five years; average hourly earnings rose 2 percent year-on-year in February of 2015: about the same as in February of 2010.” Hence, when American wage earners say they sense little improvement in their economic conditions, they are mirroring a reality captured in wage statistics.

America’s labor-market difficulties, however, don’t stop there. The percentage of Americans aged sixteen and over who are either employed or “actively looking for work” has steadily declined since 2001. This fall in the labor-market participation rate to 62.8 percent in April this year (lower than the rates of many other developed countries, such as Australia and Canada) owes something to demography as baby boomers retire. But as the Congressional Budget Office noted in 2014, “poor prospects in the labor market seem to have had an unusually large effect in recent years.” Some downsides of this decrease—such as productivity losses and an increased burden on welfare programs—are easily measurable. Others, such as the negative impact on people’s sense of self-worth, are harder to measure, but are no less real.

Why an Ambitious Agenda?

Clearly, our economy is hardly performing at an optimal level. Of course, a society’s well-being can’t be reduced to GDP growth. It’s also true that economic prosperity generates its own set of problems, such as lackadaisical attitudes towards the need for fiscal discipline. Nonetheless, a dynamic American economy is surely preferable to, say, France’s stagnant one, let alone those of economic dystopias such as Argentina, Greece, and Venezuela.

Doing nothing isn’t an option for American conservatives. I’d suggest, however, that the incremental approach generally followed by conservatives—which often amounts to trying to adjust, rather than override or completely dispense with, policies enacted by progressives—isn’t going to be enough either. Conservatives are instinctively wary of major upheavals. Yet if they really believe that progressive economic policies are seriously damaging the common good, they should perhaps do what progressives do: implement fundamental changes.

One example of such a reform would be dramatically simplifying the federal tax code. This currently amounts to 74,600 pages. That’s up from 60,044 pages in 2004 and 26,400 in 1984. The Internal Revenue Service may not be the government’s most popular agency, but its job is surely rendered impossible by the tax code’s sheer length and complexity. A major reason for the tax code’s size is that it’s used to promote numerous social and economic goals, ranging from making childcare affordable for more people to encouraging companies to switch to ethanol fuel. Some of these objectives may well be worthy aims, but is the tax code really the best way to try to achieve such ends? The distorting effects that such measures have upon the incentives that inform economic decisions are well established.

A drastic reform of the tax code to curb its use as a tool of social engineering should go hand-in-hand with another radical alteration. Corporate welfare has long been a feature of American economic life. Now, however, it goes far beyond direct subsidies or the workings of the Export-Import Bank. It also includes special loans and tax breaks for specific industries and even regulations designed to favor particular businesses in a given industry. Often presented and sometimes conceived as ways to help “the little guy” (such as small farmers), most forms of assistance go to well established, large corporations, as the economist Veronique de Rugy and others have illustrated.

Prioritizing an end to corporate welfare isn’t, however, just about introducing more competitive efficiencies into America’s economy. The broader impact would be considerable, just as Obamacare’s effects go far beyond healthcare issues. Purging the business and political worlds of crony corporatism would, for example, help pave the way for seriously overhauling America’s welfare state. Congressman Paul Ryan put it well when he said, “if we can’t get corporations off the dole, how in the world can we credibly reform social welfare programs?” Above all, ending corporate welfare would help restore integrity to American business and government. As de Rugy pointedly stated in testimony to Congress this year:

Government subsidies create an unhealthy—and sometimes corrupt—relationship between commercial interests and the government. The more the government has intervened in energy markets, the more lobbying activity has been generated. The more subsidies that it hands out to businesses, the more pressure policymakers face to keep the federal spigot flowing. As the number of lobbyists grow, more economic decisions are made on the basis of politics, and more resources are misallocated. And the door opens to cronyism and corruption.

Economic Policy Beyond Economics

These are just some big-ticket items that could form part of an ambitious post-Obama conservative economic agenda. Others might include redrafting the Federal Reserve’s charter so as to limit its scope to monetary stability. This would force legislators to stop relying on easy money policies to compensate for their failures to implement regulatory reforms that facilitate greater productivity and lasting employment. Another would be, in the face of resurgent protectionism across America, aggressively pursuing expansive free trade agreements with any nation willing to sign them.

But whatever fundamental reforms might be pursued, I don’t doubt there would be substantial electoral costs for conservatives if they were actually implemented. The retribution from powerful interest groups would be swift and lasting. Nevertheless, once weighty economic changes are institutionalized, they are hard to dislodge. That has been the lesson of the Affordable Care Act and the various legacies bequeathed by presidents Wilson, Roosevelt, and Johnson.

Solidifying such reforms, however, also means that conservatives should learn from and imitate progressives’ skill in explaining their economic reforms in terms of explicit value commitments. Progressives have long understood the efficacy of doing this. Witness the way in which they have relentlessly deployed the language of justice and equality to persuade many people of the rightness of incessant government economic intervention and expansive welfare states.

Though it often seems that way, progressives don’t enjoy a monopoly on narratives about the good and the economy. Yet many conservatives seem reluctant to discuss economic issues in moral terms. Crony corporatism is fundamentally unjust. A lengthy and excessively complicated tax code does undermine the rule of law. So why do so many conservatives handicap themselves by relying on strictly technical arguments, mixed with occasional antiphons to negative liberty and appeals to be “pragmatic,” when addressing these matters?

The oddness of this situation is underscored by the fact that American conservatives can draw upon numerous normative resources to make their case. These range from the commercial republicanism espoused by Alexander Hamilton and other Founders, to the economic vision portrayed in Michael Novak’s The Spirit of Democratic Capitalism. Though differing in certain emphases, these sources share an understanding of commerce and the institutions that promote economic liberty as indispensable to societies that take the good life seriously.

In short, pro-market conservative reformers need to grasp what progressives have recognized for decades: that people who embrace social democratic views of economic life will become increasingly disinclined to protect the values and institutions undergirding a market economy. That’s why any serious conservative economic agenda must involve not just root-and-branch reform, but also transformation in the way that conservatives make their case for the optimal direction for economic life in a post-Obama America.

Samuel Gregg is Research Director at the Acton Institute.

Doing Injustice to the Just Price http://www.thepublicdiscourse.com/2015/08/15435/ http://www.thepublicdiscourse.com/2015/08/15435/#comments Fri, 07 Aug 2015 11:00:41 +0000 http://www.thepublicdiscourse.com/?p=15435 A recent article in the Journal of Clinical Oncology on the just price of cancer drugs in the United States contains an odd reference to a nonexistent book by Aristotle. Unraveling the origins of this error reveals an almost farcical series of misinterpretations.

Arguments from authority are generally a good thing. If claims come from people with a few letters after their names, it’s often safe to bet that those claims are backed up by years of invested study and expertise, especially when they’re published in peer-reviewed journals. Scholars want to protect the integrity and reputation of their discipline, which in theory should filter out any faulty arguments or unfounded claims long before they reach the public eye. But when scholars speak outside their sphere of proper authority, that system can fail spectacularly—hilariously, even.

Earlier this summer, I came across an article published in the Journal of Clinical Oncology entitled “Cancer Drugs in the United States: Justum Pretium—The Just Price.” As a student of economics with an interest in the history of just price doctrine, I found myself intrigued by the promise of a thoughtful application of the writings of Thomas Aquinas and other Scholastics to contemporary questions in the healthcare market. The historian of economic thought Raymond de Roover succinctly defines the just price:

According to the majority of the doctors [of the church], the just price did not correspond to the cost of production as determined by the producer’s social status, but was simply the current market price, with this important reservation: in cases of collusion or emergency, the public authorities retained the right to interfere and to impose a fair price.

With this understanding in mind, I expected an examination of potential collusion among pharmaceutical companies, or perhaps an extension of this “emergency” exception to life-threatening illnesses. What I got, however, was something I never thought I would see in a refereed journal.

Although the article bears the words justum pretium in its title, the phrase only appears once, in a paragraph that I had to reread several times for the absurdity of it to fully sink in:

Aristotle is credited to be the first to discuss the relationship between price and worth in his book Justum Pretium—the just price. Sixteen centuries later, Saint Albert the Great and Saint Thomas Aquinas refined Aristotle’s argument. Their conclusion: of moral necessity, price must reflect worth.

Initially, I was shocked to discover that Aristotle had written an entire book devoted to the just price. I had heard of his works on politics, ethics, poetics, and logic, but a complete work devoted to economics was news to me. Had these four medical doctors discovered a long-lost manuscript, perhaps one that had fallen behind a bookshelf in an old monastery, and thus made a ground-breaking contribution to the Aristotelian corpus (and written in Latin, no less!)?

Since Aristotle died in 322 BC, an entire prose work of his in Latin would predate one of the earliest surviving works of Latin literature, Plautus' Pseudolus, by over 130 years. This book Justum Pretium would revolutionize our understanding not only of early Latin, but also of the cultural network in the Mediterranean that would have led Aristotle to write an entire book in a language other than his native Greek. I desperately wanted to learn more about this book.

Unfortunately, the authors provided no footnote to support their revolutionary claim. Perhaps they assumed that knowledge of Aristotle's basic works was so common that it needed no citation. Clearly it's not common enough, else the authors would have known how outrageous their passing comment on Aristotle was. To be fair, the sentence on Aristotle contributes nothing to the actual argument of the paper, so it's understandable that it could glide under the radar of the peer-review process. But all this only raises the question: Why draw on a manifestly irrelevant and undocumented claim for the title of the paper? Perhaps the authors wished to appear sophisticated; instead, they revealed themselves as Sophists (and we all know what Aristotle thought about Sophists).

A friend alerted me to a possible source for this audaciously irrelevant claim about Aristotle: what appears to be a transcript of a public radio broadcast. As far as I can make out, this particular episode dates from January of 2013. Given the JCO article’s publication in May of 2013, it’s a plausible source for an otherwise inexplicable error. The radio program transcript reads:

Aristotle is often credited as the first person to take an in-depth look at the relationship between price and worth, devoting an entire book of the Ethics to the justum pretium – the just price. A remarkable work in its time, the Ethics was reintroduced to the Roman Catholic Church sixteen centuries later by Saint Albert the Great and his student Saint Thomas Aquinas. Albert and Thomas in turn refined Aristotle's argument. Their conclusion? Of moral necessity, price must reflect worth.

The two passages, one from the JCO article and the other from a radio program, are strikingly similar, and the last sentence is even copied verbatim (punctuation excluded). Furthermore, one can see how “devoted an entire book of the Ethics to” could be corrupted to “wrote a book called.” It's understandable that the JCO authors would hesitate to cite a ninety-second radio blurb, but that apparently didn’t stop them from clumsily paraphrasing the content.

The radio source has its own share of problems as well. On the factual side, Aristotle did not devote an entire book of the Ethics to the just price. Instead, he handles justice in exchange under a single subheading in Book V and never conjoins the words “just” and “price,” much less their Latin equivalents. On the credentials side, the “guest scientist” on the program is not a historian, a philosopher, or an economist—he’s an industrial engineer. What does an industrial engineer know about the just price? All he needs to, apparently. Notice how he introduces and concludes the article:

As a child I once asked my father, “How much is something worth?” A practical man, he answered without hesitation: “Something is worth exactly what someone else is willing to pay for it.” . . . Apparently the great scholars who preceded my father were able to add little to his offhand insight.

All those “great scholars” (read: Aristotle, Aquinas, and the countless canonists, jurists, and theologians who came between) know little that your ordinary “practical man” does not. The high-flown efforts of theologians to determine the just price (“many solutions have been proposed”) are contrasted with the “practical” and “obvious” solution of the jurists that the just price is simply the current market price. There is only one remaining problem—he’s wrong.

The online transcript notes that the program’s material is drawn from John W. Baldwin’s 1959 essay, “The Medieval Theories of the Just Price: Romanists, Canonists, and Theologians in the Twelfth and Thirteenth Centuries.” Published in the Transactions of the American Philosophical Society, Baldwin’s article (in marked contrast to the previous two specimens) is a well-researched, thoroughgoing treatment of the history of the just price from Roman law to Thomas Aquinas. Baldwin finds that, far from “many solutions” having been proposed, a single definition of the just price has been shared at every point in its development: the current market price. Even Thomas Aquinas himself, often touted as a progenitor of Karl Marx because of his alleged “labor and expenses” theory of value, in the Summa Theologica adhered incontrovertibly to the understanding of the just price as the commonly determined market price. This comes as no surprise given that his teacher, Albert the Great, is quoted in the radio program as holding the same view, even though he was earlier mentioned as one of the high-flown theologians.

If “many solutions have been proposed,” they were certainly not proposed by any of the Romanists, canonists, and theologians in the twelfth and thirteenth centuries. But to find another proposed solution we need look no further than the JCO article with which we began. Immediately after the paragraph quoted above, the authors contrast the just price with current practice:

In the context of cancer therapy, the prices of new anticancer agents seem to be decided by pharmaceutical companies according to what the market will bear. There is little correlation between the actual efficacy of a new drug and its price, as measured by cost-efficacy (CE) ratios, prolongation of patient life in years, or quality-adjusted life-years (QALYs).

Well, yes. After all, the just price is precisely “what the market would bear,” an insight that more careful attention to the radio program, to say nothing of the Baldwin article, would have provided. They provide a supposedly incriminating “example of a drug being worth as much as it can be sold for,” which for them highlights a “disturbing trend.” Apparently the just price itself is a disturbing trend, because that is precisely how it was formulated by the medieval canonists—as mentioned in the radio broadcast. In place of actual just price theory, the authors propose “linking price to a true measure of quality” and “encouraging prices based on real value” to be determined by an objective measure. Good luck finding that in Aristotle’s book Justum Pretium.
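The JCO passage quoted above leans on cost-efficacy (CE) ratios and quality-adjusted life-years (QALYs) without defining them, so a brief gloss may help. What follows is a minimal sketch of the standard incremental cost-effectiveness ratio used in health economics; it is not drawn from the JCO article, and the symbols are chosen here purely for illustration:

\[
\text{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
\]

Here \(C\) is the total cost of each therapy and \(E\) its effectiveness, typically measured in QALYs. On this accounting, a drug offers better value the lower its cost per QALY gained; the JCO authors' complaint is that the prices of new anticancer agents track no such measure.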

This is not to say, however, that the problems mentioned in the JCO article do not exist. There may very well be an element of corporate collusion that needs to be addressed. People are in fact driven to bankruptcy by paying for their cancer medications. A judicious application of just price theory to the current cancer drug market would need to examine any cases of price discrimination or taking advantage of emergency situations in light of over a thousand years of scholarly discussion. Such a treatment is the stuff of which peer-reviewed articles ought to be made, but regrettably can’t be found in the Journal of Clinical Oncology piece.

To understand just how odd this article is, consider everything we’ve seen. An article in a medical journal justifies its title with a claim that Aristotle wrote a book that he didn’t write in a language that he didn’t speak. Its authors draw this claim from a misunderstanding of a public radio broadcast, which itself misunderstands the only scholarly source it relies on. Not only do they paraphrase and even quote this broadcast without citing it, they proceed to ignore what it actually says about the just price and instead substitute their own definition. This can only be construed as either abject laziness or a pernicious attempt to root their claims in a historical tradition backed by some of the biggest names in philosophy while misrepresenting what they said and failing to properly credit their sources. This is not just bad scholarship—this is intellectual robbery. If the just price should be related to “true value,” this article is worth less than the government bonds of Greece.

John B. Shannon is an economics and Latin major at Hillsdale College and an intern and editorial assistant at the Journal of Markets & Morality at the Acton Institute for the Study of Religion & Liberty. He wishes to thank Chris McCaffery for the link to the radio broadcast.

On the Dangers of Thanking God for the Atom Bomb http://www.thepublicdiscourse.com/2015/08/15448/ http://www.thepublicdiscourse.com/2015/08/15448/#comments Thu, 06 Aug 2015 11:00:22 +0000 http://www.thepublicdiscourse.com/?p=15448

Seventy years ago today, the United States dropped an atomic bomb on Hiroshima. And earlier this week, in an op-ed for the Wall Street Journal titled “Thank God for the Atom Bomb,” Bret Stephens spoke for many in complaining that once again the anniversary of the dropping of atomic bombs over Japan will be met with “cant.” There will be, he writes, demands for an apology from the US, calls for nuclear disarmament, and handwringing over man’s inhumanity to man. Near the end of his essay he describes a "U.S. public [that] is 'consumed with guilt for sins they did not commit.'” And he proposes the expiation of all these ills: “Watch the light come on at night in Hiroshima. Note the gentleness of its culture. And thank God for the atom bomb.”

I cannot help but feel some doubt as to the accuracy of Stephens’s description. Each August I am rather struck by the vociferous support for the atomic bombings, often expressed by those with whom I share what I take to be basic pro-life commitments to the inviolability of human life. Those commitments play themselves out in resolute opposition to abortion and euthanasia. But often they seem to have no purchase on the issues of capital punishment or just conduct in warfare.

No Intentional Killing: Proposed Exceptions

Where capital punishment is concerned, the divergence is intelligible, and the position of those opposed to abortion but supportive of the state’s right to execute convicted criminals is defensible, though I think in error. For the victims of abortion are innocent of all wrong and threat. Their killing is not only a violation of the sanctity of life; it is also manifestly unjust. No victim of abortion consents to his or her own death for the sake of another. And so the wrong of intentional killing—of acting in a way directly contrary to the basic good of human life—is compounded by the wrong of acting unjustly. This is hardly surprising: most killings are surely unjust.

The case of euthanasia is a bit trickier. If we assume that some, presumably very small, percentage of those who request physician-assisted suicide (PAS) or euthanasia do so entirely voluntarily, then we must concede that no injustice is done to them if they are killed, or are helped to kill themselves, for, as Aristotle and Aquinas both held, no one suffers injustice willingly. A terrible wrong against the good of human life is still committed, of course. And, what is more important from the perspective of political morality, there is no way at all of legalizing and institutionalizing PAS or euthanasia that does not unjustly put at risk those who cannot freely consent, whether from depression or some other incapacity, and those who will be pressured unjustly by family members, doctors, or others into accepting an early death. So the practice of PAS and euthanasia is also always unjust.

The injustice in capital punishment may be reasonably denied, at least if adequate safeguards are met. But, as I have argued, it is nevertheless always wrong, for it is always wrong to intend the death of any human person, even when that death would not be unjust. So the combination of opposition to abortion and defense of the death penalty is an erroneous, but not unintelligible combination.

Killing, Both Intentional and Unjust

But defense of the bombing of Hiroshima and Nagasaki is different. There can be no doubt, nor does Stephens evince any, that the bombings were carried out with the intention of inflicting massive civilian casualties in order to demoralize Japan and break its leadership’s will. These civilians included the aged and infirm, women and children, all of whom were innocent in the relevant sense of just war doctrine—they posed no threat—and the last of whom were categorically innocent in every way. The proposal carried out in the bombings was to kill enough such persons, and show a willingness to kill more, that Japan would surrender unconditionally.

Such killing is both contra-life, in that it involved a massive attack on human beings with the intent of ending their lives; and unjust, in that it involved an attack on the lives of human beings who themselves posed no threat in any reasonable sense of the word. Its proponents even now justify it primarily, as does Stephens, not by denying the intention of killing the innocent, but by reference to casualties prevented, a consequentialist justification.

About consequentialism, much could be said. Appeal to the consequences as the sole justifying factor requires that those consequences be capable of measurement by a common standard. But there is no such standard by which the true multiplicity of consequences may be measured: not just lives lost versus lives saved (and why should the goodness of those lives be strictly commensurable?), but also the consequences for the character of those who take, and those who lose, their lives. This point bears some consideration, going, as it does, to the difficulties we should find with Stephens’s concluding line.

Self-Constitution Through Choice

The choice to use atomic weapons over Japan was just that: a choice. As such, it did more than simply bring about an external state of affairs in which, possibly, though not definitely, more lives were saved than lost. Rather, that choice had lasting consequences for three groups of agents.

First, those individuals who made and carried out the choice constituted themselves as persons willing to kill innocent persons for the sake of good consequences. Unless repented of, this choice became part of the character of those agents, surely shaping their subsequent moral deliberation. Would they have stopped after two bombings, even if Japan had not surrendered? Why would they have, having now become persons willing to go this far?

Second, there was the nation over which these agents possessed political authority. In exercising that authority, Truman chose for the United States. When Stephens speaks, albeit slightingly, of a public consumed with guilt, I think he is not far off, for through Truman and his agents, the country acted. Of course, there is a moral difference between those who acquiesced in Truman’s authoritative decision to drop the bombs, and those who did not. But Truman’s authority brought with it the possibility of self-constitution for the United States, and much of the country endorsed and internalized precisely the choice that Truman made on their behalf.

Did this self-constitution on the part of our country have its own consequences? It surely did. Stephens describes a chastened Japan, one dedicated to peace. But he passes over the subsequent history of our nation, a history that includes further acts of indiscriminate killing during the Vietnam War, a standing resolution to destroy the Soviet Union if it were first to attack us with nuclear weapons, and the eventual adoption by the nation in its domestic affairs of death as a solution to be embraced for its consequences—before birth, as in abortion or embryo-destructive research—or at the end of life, in PAS and euthanasia. These are, sadly, natural choices for a country swayed by consequentialist justifications; the way to those choices was paved by the literally catastrophic choice to destroy Japanese cities (as before them, German cities) for the sake of military gain.

And this leads to the third set of persons for whom that choice had, or may yet have, consequences: Those of us today who are faced with the question of whether to rejoice or grieve at the choices made by others now seventy years ago. For the reason I have given, grieving is the better path; in so responding, we reject the choice, and set our hearts against the killing of the innocent.

But consider instead Stephens’s suggestion: that we thank God for what was done. Is there any greater endorsement of Truman’s decision, or any surer way of internalizing in our own character the choice that was made? I doubt that there is.

Caution for the Present

Over the years on Public Discourse there has probably been only one issue on which I have written that has angered those who otherwise are my pro-life friends more than the issue of killing in capital punishment and in war, and that is the issue of lying for a good cause. Currently, pro-lifers are again decrying, and rightly, the barbarism of Planned Parenthood, with its officials’ callous indifference to the real, and not merely monetary, value of the human beings destroyed in abortion. But knowledge of that indifference, and the actions rooted in it, seems to have come about once again by means of techniques that involved lies.

If so, then the actions of the Center for Medical Progress ought not to be celebrated any more than the actions of Truman and the United States in their military conduct in WWII. Celebrating either brings, for pro-lifers, a specific danger, the danger of internalizing a willingness to act contrary to basic goods, and to violate absolute norms, for the causes we believe to be most important. That is not the path to righteousness, whether as individuals or as a nation; it is not the higher ground the pro-life movement is called to occupy. We should give thanks when we are given the strength and courage to occupy that ground, not for the willingness to abandon it.

Christopher O. Tollefsen is professor of philosophy at the University of South Carolina and a senior fellow of the Witherspoon Institute. He is the author of Lying and Christian Ethics (Cambridge, 2014).

For Baby, #AnotherBoy, and Millions More: Why I’m Speaking Out about Miscarriage and Abortion http://www.thepublicdiscourse.com/2015/08/15441/ http://www.thepublicdiscourse.com/2015/08/15441/#comments Wed, 05 Aug 2015 11:00:07 +0000 http://www.thepublicdiscourse.com/?p=15441

Little fingers, on a little hand, on a little arm … A leg in the bottom corner, the heart amidst the remains, but no brain—it was “blasted out with the water.”

“Was that crack the little bits of the skull?”

“Mmhmm … I just want to see one leg. Here’s a foot …”

“It’s a baby.”

And then, the now infamous declaration: “Another boy!”

These are not scenes from the latest slasher film, though graphic content warnings did precede the fourth undercover video released by the Center for Medical Progress. This is the footage of a Planned Parenthood team picking through the remains of an aborted baby to show the alleged buyer harvested body parts. This is the video that must shift the Planned Parenthood controversy away from legality to morality. The body parts of a first-trimester baby are sifted through with as much respect for life as an eighth-grader has for the frog he dissects in biology lab.

Good people know that killing and dismembering an innocent human is evil. Yet many continue to defend the funding of Planned Parenthood because they believe and perpetuate the lie that abortion is not the killing of a human. When faced with the injustice of killing an innocent life, they look away, hiding from the truth and calling death by another name.

Having carried life in my womb, I cannot look away. I cannot cloak reality in another name: Early pregnancy loss is death, and willful termination is killing.

The “Products of Conception” and Our Culture of Silence

Judging by her article in The New Republic, Dr. Jen Gunter would argue that I simply fail to understand medical terms. To her, when I suffered two miscarriages, I did not lose my children; I only lost the “products of conception.”

In her view, if everyone simply accepted the “medical” terms of fetal development, we would then be able to stomach the picking apart and harvesting of organs as just another distasteful but necessary act of medical science akin to treating “broken limbs, rotting flesh, and cancers that smell.” But Dr. Gunter misses an important distinction. While all of the activities she lists are indeed gross, only abortion purposefully and expressly eliminates life.

Dr. Gunter has betrayed her Hippocratic oath. As a physician, she pledged to “remain a member of society with special obligations to all [her] fellow human beings,” yet she denies the humanity of the unborn. While in utero, she explains, the terms embryo and fetus are used; the word “baby” does not apply till birth. Thus, she argues, an abortion is not the killing of a baby.

This cold calculation is a lie. Whether embryo, fetus, baby, toddler, college student, or senior citizen, the product of a successful human conception is a human being.

But perhaps there is a reason that Dr. Gunter’s lies can be perpetuated. In one way, she is right: As a culture, we fail to understand the truth about early pregnancy. The public conversation is hushed at best, leaving room for falsehoods and misconceptions. The absence of public discourse concerning early pregnancy diminishes the thing itself, leaving women confused and unprepared for the worst.

This culture of silence was challenged last August as loud whispers shot out across the internet at Jill Duggar’s announcement that she was pregnant. The announcement came only a few months after she married Derick Dillard. The math just didn’t make sense: Either they were sexually active before marriage or (gasp!) Jill and Derick had “made the shocking decision to announce her pregnancy at just eight weeks.” This early announcement was so atypical that it prompted follow-up articles in which Jill and Derick explained themselves. Jill told Page Six: “Understanding that the majority of miscarriages happen within the first trimester, and believing that every life is precious no matter how young, we decided to share our joyful news as soon as we could.”

Our Culture Doesn’t Understand Early Pregnancy—and Neither Did I

While Jill was announcing her pregnancy, I too was pregnant. I sat quietly admiring her bravery, still afraid to break the cultural norm. I had already lost a child to miscarriage only a few months earlier.

At that time, I did not understand the inadequacy of the public conversation concerning early pregnancy and my responsibility to be a life-affirming part of this conversation. I did not see how this gap in discourse hurt women and families. By not talking about early pregnancy, we fail to understand it. It is easier to avoid the topic, because early pregnancy is hard to see: a woman’s body has hardly changed in appearance, she cannot feel any kicking, and the only evidence of life is a strange aversion to vegetables and a new relationship with the closest toilet.

Last September, as I read Jill Duggar’s Facebook updates about her pregnancy, I thought back to only a few months before. That April, my sister-in-law and I were pregnant together. Our due dates were only eleven days apart. After we both announced our pregnancies to our families, my head filled with hopes for the cousins we were carrying. My hopes only remained for three days before I was diagnosed with a miscarriage at 10 weeks 6 days.

Before that doctor’s visit, we had decided that we would publicly announce our pregnancy when we left. But afterwards, instead of taking cute pictures for our Facebook pages, we found ourselves telling everyone we knew about our loss. “I cannot come to work this week, I have miscarried.” “No, I cannot help you, I have a D&C that day.” Surrounded by a wonderful support system, I was affirmed in my grief. My mother, having suffered a miscarriage of her own, assured me that it was okay to grieve, for I had lost a child. Yet I still didn’t know what she meant.

Eager to move forward, I got pregnant again in July. This time, I told friends and family right away. However, because we had just moved to a new community, I kept my pregnancy a secret from my husband’s colleagues and our new friends.

At the end of September, we were diagnosed with a second miscarriage at 10 weeks 5 days. Experience now seemed to affirm that pregnancy leads only to a couple of months of horrible sickness, insufferable tiredness, and crippling food aversions, capped by grief and disappointment. I could not understand the simple truth: pregnancy = baby. Again, I found myself telling everyone about my loss. Keeping my pregnancy a secret did not protect me or my husband; it only robbed me of a support group for the two months I suffered pregnancy symptoms.

Laboring Through Grief

Still raw from the experience of my second miscarriage, I spent Thanksgiving with my husband's family as everyone eagerly awaited the birth of our nephew. That Friday night, as my sister-in-law labored to birth a son, I labored through grief, oppressed by death. It was this laboring that produced full awareness that my miscarriages were the deaths of my children. I returned to those three short days when we celebrated the cousins we were carrying. Each had been a “product of conception”: an embryo, and then a fetus. In those moments, my sister-in-law and I were experiencing the same reality. The difference was not that at some time later in her pregnancy life had been created. The difference was that, in my pregnancy, life had ended.

With my grief from both miscarriages compounded, my thoughts suddenly focused on our first, the one we should have been holding—the one we called “Baby.” That night I missed Baby. I missed Baby like I had never missed Baby before. I didn’t miss the thought of Baby, the hope of Baby, or even future babies. I deeply missed Baby—and my heart cried out to hold Baby. Suddenly, I was not grieving two tragic experiences; I was grieving the lives of two children. Determined not to undermine their lives, I carefully set out to talk about miscarriage more seriously with other women and families.

Nearly a year after Duggar's announcement, in the midst of a vibrant public conversation about abortion, Mark Zuckerberg announced his wife Priscilla’s pregnancy on his Facebook page. His announcement did not include pictures of cute little baby shoes. There were no puns, riddles, or rhymes. Instead, he celebrated the life of their child and reflected on their three previous miscarriages. He noted the loneliness surrounding early pregnancy and pregnancy loss: “Most people don’t discuss miscarriages because you worry your problems will distance you or reflect upon you—as if you're defective or did something to cause this. So you struggle on your own.”

Zuckerberg then drew attention to the poor public conversation and called for something better: “In today's open and connected world, discussing these issues doesn't distance us; it brings us together. It creates understanding and tolerance, and it gives us hope.”

With an estimated one in four confirmed pregnancies ending in miscarriage, people were ready to answer his call. TODAY Parents ran an article, “Women applaud Facebook founder's call to be more open about miscarriages,” and the Huffington Post cried: “Thank You, Mark and Priscilla, for Sharing about Pregnancy Loss.”

It’s Time to Talk about Early Pregnancy

Jill Duggar and Mark Zuckerberg are right; it is time to talk about early pregnancy. I will go even further: It is time to understand the loss of an early pregnancy as a death and to allow a space for women to grieve this loss of life. The lies perpetuated by pro-choice advocates tell families that they did not lose a child but only a “product of conception.”

This hurts not only the woman who miscarries, but the woman who finds herself in the position to abort. I second Kathryn Jean Lopez, who says to women who have experienced abortions:

I am so sorry. That we cannot go back to that moment and get you the help you needed, the information you wanted, the hope and love you craved. I am so sorry about what you’re learning now, what you’re seeing, what you’re reliving. Our culture didn’t let you mourn—we pretend you weren’t a parent, that you’re not. What’s hidden is laid bare. Know you’re loved and not judged. I’m so sorry the law let this happen. I’m so sorry we let this happen.

Our current ways of discussing early pregnancy have confused our understanding of both the woman who has miscarried and the woman who has aborted. From both women, we have taken away the ability and the right to grieve by denying the death of their child.

I beg you: Do not turn your head from death. The purposeful destruction of fetal tissue is the destruction of an innocent life. Regardless of the legalities surrounding Planned Parenthood’s participation in the sale of fetal tissue, the Center for Medical Progress has called our attention to the disgusting practice of abortion, and we must answer. When a woman is in the tragic situation of unwanted pregnancy, we have to offer her more than death. If she is considering abortion, we must meet her with something better. Death will not free her from her misery. We must meet her with life.

As Jill Duggar and my sister-in-law hold their sons, as Priscilla carries her child with hope mixed with fear, and as all the mothers who have lost early pregnancies cry out with a longing pain that pierces the heart, it is time for a cultural change. It is time we serve these women by acknowledging early pregnancy loss and affirming their motherhood. It is time to have a new conversation about life knowing the reality of death—for Baby, for #AnotherBoy, and for the millions more we’ve lost.

Emily Carrington is a housewife in Southern Michigan and a mother of two children lost in early pregnancy.

Planned Inhumanities: From Roe to Obergefell http://www.thepublicdiscourse.com/2015/08/15422/ http://www.thepublicdiscourse.com/2015/08/15422/#comments Tue, 04 Aug 2015 11:00:17 +0000 http://www.thepublicdiscourse.com/?p=15422

I am, perhaps, an outlier on the current Planned Parenthood scandal. I am not shocked that high-ranking officials in an organization by that name would be caught on video speaking callously about the harvesting of fetal organs. The fact that money is exchanged, and the question of whether this constitutes a “market,” do not particularly matter to me. Well-educated people believe that “planned parenthood” can lead to a socially just world. That hubris is the main horror from which all these other abhorrent things descend.

The Monstrous Idea of “Planning”: From Roe to Obergefell

It is the “planned” part of the organization’s title that needs to be urgently criticized. What kind of society is so lacking in humanity that it thinks “parenthood”—a phenomenon responsible for, well, the perpetuation of everything social about us—can be regimented, organized, scheduled, commoditized, bought, sold, and programmed by people? And in particular, by the people running this soulless association? Stop for a moment and consider the intellectual consequences of this foundational belief that humanity can be “planned.” Such a belief means that humans can be edited and arranged, by contract if necessary. To be editable, people, particularly children, must become objects rather than subjects.

Once they become objects, children can be treated as dehumanized products in multiple ways, all bad. They can be disposed of, like integrated waste, when they are not convenient or not proceeding according to plan. Just as we recycle cans of Diet Coke and milk cartons, we can try to limit the wastefulness of our garbage by recycling the broken-down parts of people: their livers, hearts, lungs, and brains. All of this is management of objects, which costs money, so who is to say that there shouldn’t be some remuneration? Why not reimburse the people who are stuck with this waste for the cost of transporting and recycling it? Why not pay them a salary and make the salary attractive so that qualified professionals are indeed willing to take on such a ghoulish task?

The flip side of the disposable child, of course, is the child as a desired commodity. Since people can be thrown out when they are not convenient, they can also be manufactured and maintained through industrialized processes, when the natural process of lovemaking is not convenient. And alas, this leads us straight to the sublimities of Justice Kennedy’s majority opinion in Obergefell v. Hodges.

Kennedy’s opinion emphasized the constitutional right of gay couples not to be lonely. According to Kennedy, the Fourteenth Amendment guarantees that gay couples must be given marriage licenses lest they call out to the universe and find nobody to answer back to their emotional needs with love.

Obergefell brings Roe v. Wade to its climax because it completes the transformation of children into objects. For children will be forced to love gay adults who are not their parents. To Kennedy, gay adults have a right not to feel lonely, which includes the right to start families. In fact, he states that they have a right to “custody” and “birth certificates” (i.e., birth certificates falsified to include two same-sex parents and erase biological parents of the opposite sex). To satisfy the human right to dignity and to thwart the civil injustice of “loneliness,” children must be produced and provided to people who want them, whether or not those people conceived the child by making love.

Children not only can, but must, be manufactured. The transfers of custody must generate orphans and abandoned children, paying gamete donors and surrogates to abandon and orphan their offspring, so that this new product—the loving and obedient human being—can be delivered to paying customers.

You can’t be against Roe but for Obergefell. It all goes together. The small but crucial part of the electorate—largely made up of younger Americans—who oppose abortion but support gay marriage are perilously deluded. The objectification of children through one means will lead inexorably to the objectification of children through another means. The “child as waste product” and “child as product for sale” are the same child: the dehumanized and “planned” child suited to make paying customers happy.

Killing Humanities-Based Education Opened the Door to Inhumanity

The wine-sipping doctor of Planned Parenthood didn’t come out of nowhere. This individual was dealing with people who claimed to be doing research with fetal tissue. She was educated by a system that framed her brutal trade as not only acceptable, but just and fruitful.

Dr. Nucatola is the inevitable offspring of a society that has no way to discuss humanity, no real lens into the history of past atrocities, no true connection to all the arts and letters left by millennia of writers about what makes us human and why humanity is precious. She is the indispensable sentinel of the society and the educational system that gave us the twin disasters of abortion and gay marriage.

The shocking videos released about Planned Parenthood and fetal tissue trafficking give us a precious glimpse into our own society’s spiritual crisis. This crisis links abortion to gay marriage, third-party reproduction, and genetic engineering (originally designed for straight consumers and now increasingly fitted to the needs of gay couples). It is also connected to corruption in higher education. The importance of the academy is clear to the leftists who dominate and exploit it, but its influence is dangerously underestimated by conservatives, who often muse about its decline with harmless, detached indignation. In truth, like two other industrial crises starting with “H”—healthcare circa 2009 and housing circa 2006—the rotten foundation of our higher education system is about to crumble.

Republicans such as Scott Walker tap into conservatives’ frustrations with higher education and offer solutions like scaling back tenure and blocking the advancement of faculty bargaining units. While Walker is more humane than the vile monsters who run most of the arts and sciences these days, he is nonetheless feeding the very problem that fuels conservative frustration. Emphasizing the practicality of trade-based education at the expense of supposedly wasteful humanities programs, the Walker approach just reinforces the notion that older generations only need to teach younger ones about things that make money and satisfy consumers. This is precisely the unreflective attitude that gave us abortion and gay marriage.

On a brutally pragmatic level, stripping faculty of protections such as tenure and collective bargaining will not lead to the rooting out of junior Nucatolas. It will rather allow the liberals who dominate universities to gang up on the few conservatives who might stand for the sanctity of life. This danger is especially strong in fields such as literature, philosophy, and history, where the left is particularly emboldened to discredit the right, and where subjective evaluation criteria give them ample opportunity to do so.

Conservatives must do the difficult and tiresome work of taking back the humanities. In 1987, Allan Bloom foresaw the humanities facing impending doom at the hands of multiple forces, mostly coming from the political left, that seemed ready to strip away their potential use in examining what it means to be human. As I explained in an essay for Humanum Review, Bloom was on to something. Yet even he did not foretell the vast spiritual devastation awaiting the United States and the diabolical role played by “researchers” and “experts.” The neoliberals’ domination of the humanities, which Bloom forecast as a dying “Atlantis,” has reached levels he never imagined.

Yet this does not mean that we should give up on the humanities. If you are outraged about Planned Parenthood and Obergefell, there is a battlefield where you can fight for humanity. It is time to take back higher education. Don’t give up on it when your fighting spirit is most needed.

Robert Oscar Lopez is author of Colorful Conservative: American Conversations with the Ancients from Wheatley to Whitman and editor of Jephthah’s Daughters: Innocent Casualties in the War for Family Equality.

The Dark Side of Third-Party Reproduction http://www.thepublicdiscourse.com/2015/08/15413/ http://www.thepublicdiscourse.com/2015/08/15413/#comments Mon, 03 Aug 2015 11:00:18 +0000 http://www.thepublicdiscourse.com/?p=15413

I’m sure you’ve seen them in the media: attractive, well-off, smiling parents holding adorable infants created by third-party reproduction and assisted reproductive technologies (ART). Of course, the narrative goes, this development is a win-win for all. Who could object to children being created for those who through either infertility or biological sex are unable to reproduce?

But this picture hides the highly profitable fertility industry’s dirty secrets. It ignores what is required to create these children: exploitation, health endangerment, and the commodification of human life. An honest look at the facts and circumstances surrounding third-party reproduction and ART should give any thinking person pause.

The Exploitative Consequences of Egg Harvesting and Surrogacy

Third-party reproduction first of all requires the procurement of gametes, a man’s sperm and a woman’s egg. The egg is fertilized in a laboratory, and a woman must gestate and give birth to the resulting embryo or embryos. What, in terms of chemicals and technology, is involved in obtaining the necessary human gametes? Here, biology is not exactly fair. While sperm is obtainable through the straightforward process of male ejaculation, obtaining eggs is a radically different matter. The egg provider must undergo weeks of painful self-injections of carcinogenic synthetic hormones and other drugs, followed by surgery for egg retrieval.

Normally, a woman produces one or two eggs per month, but third-party reproduction calls for more. Hence the practice of “eggsploitation”: the artificial procurement of an unnaturally large number of eggs—sometimes dozens—at once from healthy young women.

But acquiring eggs isn’t enough. You also need a womb. The surrogate mother, the woman who will gestate and give birth to the resultant embryo, must undergo a similar regimen of dangerous and painful procedures to prepare her body for implantation and gestation.

All of these procedures to which the egg provider and surrogate are subjected pose devastating short- and long-term health risks. The short-term risks include ovarian hyperstimulation syndrome (OHSS), characterized by difficulty breathing, excruciating pelvic pain, swelling of the hands and legs, severe abdominal pain and swelling, nausea, vomiting, weight gain, low urine output, and diarrhea. OHSS can be fatal. Other short-term risks are ruptured cysts, ovarian torsion, blood clots, chronic pelvic pain, premature menopause, infection, difficulty breathing, allergic reaction, bleeding, kidney failure, stroke, and even death.

The long-term risks include cancer, especially reproductive—ovarian, breast, or endometrial—cancers, and (in a sad irony) future infertility. Both surrogates and egg providers are typically given Lupron, a drug that is not approved by the FDA for fertility use (it is used to treat men with advanced prostate cancer), to produce the onset of menopause, with potentially incapacitating and long-lasting effects. Lupron and Synarel are used off-label and are Category X drugs, meaning that if a woman gets pregnant while taking the drug, the fetus will be harmed. Lupron also puts women at risk for elevated intracranial pressure.

Real People, Real Lives at Risk

It is important in any discussion of these issues not to get lost in abstraction. The new documentary Eggsploitation: Maggie’s Story, produced by the Center for Bioethics and Culture, provides a very up-close-and-personal view of what actual women are subjected to by fertility clinics and the tragic consequences that can follow egg selling.

A thirty-three-year-old woman who began selling her eggs in college, Maggie was lured by the typical combination of financial need and the desire to help someone have a child. She was paid $1,600 and went on to sell her eggs ten times. After her second or third egg sale, she had, for the first time, an abnormal Pap smear result. The last time Maggie sold her eggs, she went for the customary physical exam, and a lump was discovered in her breast. The clinic recommended that she see one of its “associates” down the street for a consultation. The “associates” dismissed the lump as probably just a cyst. Maggie went through another cycle and felt the lump grow over the next three months.

At that point, at the age of thirty-two, Maggie went to her own primary care physician, who did a biopsy. She was diagnosed with stage IV breast cancer that had spread to her bones and liver. Maggie had no genetic history of cancer. Her doctors informed her that this form of cancer generally occurs only in menopausal women or in those who have been pregnant three or more times.

In the film, Maggie recounts the ways in which the fertility industry emotionally manipulates naïve young women by telling them they are special, they are the “chosen” ones, they are part of a team, part of a “family.” The guilt-tripping narrative prevents women from backing out; they are made to feel bad for not wanting to help someone. This emotional blackmail exploits the sexist stereotype of women as altruistic, self-sacrificing givers whose role in life is to be of service—particularly of reproductive service—to others.

Little Oversight or Care

How can this happen? The answer is quite simple: There is virtually no regulation of the fertility industry in the United States. For this reason, it has become a popular destination for international fertility “tourism.” The American Society for Reproductive Medicine (ASRM) and the Society for Assisted Reproductive Technologies (SART) issue recommendations that are strictly voluntary and therefore unenforceable. For example, they advise that women undergo no more than six stimulated cycles, yet Maggie underwent ten.

There are no national registries to track the health of the women who sell their eggs or rent their bodies as surrogates. Once the woman has performed her function as an egg provider or “gestational carrier,” she is discarded and forgotten, even though she may suffer serious long-term health consequences.

Most concerning, there are no peer-reviewed medical research studies on the long-term health and safety effects of egg hyperstimulation or surrogacy. This makes it impossible for fertility clinics to provide adequate and accurate information to their recruits and impossible for women to give informed consent.

What about the children produced by third-party reproduction? The women used as breeders have few, if any, rights or protections, but the children have absolutely none. For the sake of donors’ privacy, the children have no right to information about their genetic history, despite obvious life-long ramifications for their health and medical care. In addition to frequently not knowing who their biological parents are, they have no way of knowing about any siblings they may have.

A 2001 study in the journal Human Reproduction concluded that “Disclosure to children conceived with donor gametes should not be optional.” The study cites the strongly supportive international response to the UN Convention on the Rights of the Child (1989); it was the most rapidly signed human rights convention in UN history. One of the fundamental rights included in the convention is the right to know one’s parents. In the debate about donor/seller anonymity this has been expressed as the child’s right to know the identity of his or her genetic parents. As the study states: “Increased knowledge and a gradual shift in attitudes have enabled us to acknowledge that in our contemporary culture young people have strong moral claims to know their genetic identities. It is now time for these moral claims to be converted to legal rights.”

Surrogate births intentionally sever the natural maternal bonding that takes place during pregnancy. The Journal of Child Psychology and Psychiatry published a study in June 2013 that found that “the absence of a gestational connection to the mother may be problematic.” The study also noted that children’s problems may be underreported by the procuring parents who wish to “present their children in a positive light.” The biological link between parent and child is undeniably intimate; when severed, there are lasting repercussions for both parties. A 2013 study in Reproductive BioMedicine (http://www.rbmojournal.com/article/S1472-6483(12)00670-0/abstract) surveyed 108 parents of children conceived via egg purchase and found that 50 percent regretted using anonymous providers for these very reasons.

Biological Bonds Matter

In her book Origins: How the Nine Months Before Birth Shape the Rest of Our Lives, Annie Murphy Paul documents the emerging field of fetal origins. Over the last twenty years, scientists have improved our understanding of how experiences in utero exert lasting effects from infancy through adulthood. The research reveals that pregnancy is a crucial staging ground for our life-long health, ability, and well-being. For instance, individuals gestated during the Nazi siege of Holland in World War II continued to feel its consequences decades later. The siege, combined with a severe winter, produced famine, and studies demonstrate that people whose mothers were pregnant then have higher rates of diabetes, obesity, and heart disease later in life. Their exposure to insufficient nutrition in the womb appears to have had long-lasting effects on their health. In addition, a study published in the Journal of the American Medical Association in 2005 showed that people born to women during the famine were twice as likely to develop schizophrenia. Severe maternal malnutrition, it has been found, can be a contributing factor in the development of schizophrenia.

Murphy Paul also learned that pregnant women who experienced the 9/11 attacks passed their trauma on to their offspring in the womb. That’s right: newborn children were born with the effects of PTSD, and these stay with the child as she or he matures. A PTSD expert, Rachel Yehuda, studied thirty-eight women who were pregnant when they were exposed to the World Trade Center attack, measuring their basal cortisol levels and those of their infants at one year of age. The women who developed PTSD following 9/11 had low cortisol levels, and so did their babies. The further along the women were in their pregnancies, the more pronounced were the effects on the children. According to Yehuda, “The particularly strong effects seen after exposure in the third trimester point to prenatal factors, rather than genetic or parenting factors, in the transmission of PTSD risk.”

The pregnant woman is not merely a source of potential harm to the fetus but a source of influence on the future child more powerful and positive than previously known. As Murphy Paul writes, “Pregnancy is not a nine-month wait for the big event of birth, but a momentous period unto itself, a cradle of individual strength and wellness and a crucible of public health and social equality.”

Knowing this, how can we permit this systematic severance of the inextricable union between a pregnant woman and the developing fetus within her? How has our society come to regard the primordial bond between mother and child as easily and inconsequentially severable? How can we accept the creation of a breeder class of marginalized women for the use of wealthy clients? How do we allow scientists and would-be parents to artificially engineer children with no concern for their innate rights to their biological parents, their identity, their health, or their future? How has it become permissible to subject human beings to painful, health-endangering, and even life-threatening procedures to fulfill the desires of those who feel entitled to a child and are wealthy enough to pay?

In a recent article, lesbian feminist Julie Bindel wrote about the international baby business that’s exploding with the marketing of surrogates to gay men. She observes:

Its accelerating use by gay couples is no victory for freedom or emancipation. On the contrary, the “gaybe revolution” has brought a disturbing slide into the brutal exploitation of women, who usually come from the developing world and often are bullied or pimped into renting their wombs to satisfy the selfish desires of wealthy Westerners. This cruelty is accompanied by epic hypocrisy. People from Europe and the United States who would shudder at the idea of involvement in human or sex trafficking are themselves indulging in a grotesque form of “reproductive trafficking.”

As society becomes ever more divided between haves and have-nots, as people from Greece to India to Mexico to the United States become more financially desperate, as corporations turn all living things—from plants and animals to human beings and the earth itself—into commodities for profit, we must ask ourselves: have we degenerated into a dystopia where the marginalized and most vulnerable are fair game for exploitation and children are products to be designed and engineered in a eugenic manifestation of narcissism?

I don’t believe that this is the kind of world most people want to live in; at least, I hope not. If this is not the kind of world you want to live in, it is incumbent upon you to take action. Join the international Stop Surrogacy Now campaign, educate and organize your local community, write letters to the editor, meet with your state and national elected representatives, picket fertility clinics, generate media attention, and develop broad-based coalitions, including with those with whom you may strongly disagree on other issues. The train has certainly left the station and is gaining speed, but it can be stopped if we are actively committed and organized and demand that our voices be heard. Take action now and remember—the next Maggie could be your friend, your sister, or your daughter.

Kathleen Sloan is a former member of the Board of Directors of the National Organization for Women (NOW), executive director of Connecticut NOW, a consultant on third-party reproduction issues, and co-author of the book Race and the Genetic Revolution: Science, Myth and Culture. She has a master’s degree in international relations and has traveled the world advocating women’s rights, including at the UN Human Rights Council in Geneva and the UN Commission on the Status of Women in New York.

Class of ‘59: Our Kids by Robert Putnam http://www.thepublicdiscourse.com/2015/07/15269/ http://www.thepublicdiscourse.com/2015/07/15269/#comments Fri, 31 Jul 2015 11:00:05 +0000 http://www.thepublicdiscourse.com/?p=15269

In Our Kids, esteemed social scientist Robert Putnam compares the conditions and opportunities of the rich and the poor in Port Clinton, Ohio, his hometown, both in 1959 and today. But the government programs that Putnam proposes won’t solve a problem that starts with the family.

Robert Putnam grew up in Norman Rockwell’s America. The Port Clinton High School class of 1959 soared above their parents’ educational levels and standards of living. In the fifties, Port Clinton was one of those places where the American Dream was confirmed. Since then, things have changed. In his new book, Our Kids: The American Dream in Crisis, Putnam, now an eminent social scientist, compares the community that begot him and his classmates to the fragmented America of 2015.

Then and Now

Port Clinton in the fifties was a land of opportunity. Frank, Port Clinton’s “rich kid,” started out doing manual labor for his father’s company and later joined the Navy. The high-school quarterback, Don, was raised in a poor family. His dad worked seventy-five-hour weeks at factories and prioritized investments—homeownership and piano lessons—over his family’s food security. Upon graduating from college, Don became a pastor and football coach.

It would be easy enough to dismiss Putnam’s recollections with a condescending smile. But Putnam is in the company of many other memorialists who confirm that mid-century America really was different. Culture was more compressed; a handful of TV stations and popular magazines depicted largely the same worldview. Hard evidence on the rate of socioeconomic mobility at the time is thin. The landmark study of mobility trends only goes back as far as people born in 1971, when Putnam was 30.

Port Clinton still exists, but it’s a Pottersville now. Its affluent families have little connection to the town. Most factory jobs are gone. Areas that were once middle class are now lower class. In Our Kids, we are introduced to David, a teenage father interviewed by Putnam and his research partner Jennifer Silva. By the ripe age of 18, David had worked at a diner and in a factory; he had landscaped and flipped burgers. But none of these jobs stuck. David’s list of anti-social accomplishments is even longer: juvenile crime, alcohol, drugs, broken probation, jail time, suffering parental neglect, and running with a bad crowd.

Putnam’s claim is that David is emblematic of today’s lower class—a lower class much larger and worse off than that of the 1950s. To help this growing disadvantaged class, Putnam prescribes various progressive government policies.

Top and Bottom

Putnam’s diagnosis echoes Charles Murray’s 2012 book Coming Apart, which dragged the ugly divergence in American lifestyles into the spotlight. Today’s college-educated people, Murray showed, still lead 1950s-style lives. They marry before having children, work full time, divorce at low rates, and attend church regularly. The less educated, however, have changed. They used to live very similar lives to the affluent and educated, just with less money. Now their families are in flux, they are less religious, and the men are less likely to work.

Putnam, often using the same sources, confirms and develops Murray’s account by emphasizing the divergence in childrearing trends. The rich are much more intentional in rearing their kids and deploy sophisticated approaches to the “rug-rat race.” But poor Americans struggle to translate good intentions for their children into educational achievement. More fundamentally, they lack the basic frame of family life and good behavior that Putnam recalls among the poor families of 1959.

In Our Kids, Putnam describes the divergence in each key area of child development: families, parenting style, schools, and community life. In each case, he tells illustrative stories of a rich and a poor family. Behind each pair of families is a forest of statistics and studies selected to support their stories. But Putnam does not introduce any new data work of his own, so his most ambitious claim—that opportunity in America has changed for the worse since the fifties—lacks rigorous evidence.

The stories are easily the most compelling part of the book. Putnam brings the families to life and is forthright, though not judgmental, about their advantages and follies. The reader finds himself rooting for the poor kids—won’t they catch a break? Maybe they won’t get rich, but will they at least find love, personal fulfillment, or a steady job?

Spoiler alert: Some do, but most don’t.

The poor kids face substantial obstacles, starting with their parents. David’s mother was “never there” and his father wound up in prison. Kayla’s birth was “kind of planned” and she entered a “confusing web” of step-siblings. Elijah’s parents were alternately absent or “punitive.” Lola and Sofia’s parents were gang members. Lisa and Amy’s fathers were substance abusers and their mother suffered from multiple sclerosis and depression. Not one of the families is remotely similar to that of Don or the other PCHS ’59 graduates whom Putnam interviewed.

Half the poor and poorly-raised millennials in Our Kids further sank their own prospects with teenage drug use and childbearing. Elijah “got high and drunk every night,” committed arson, and still, in Putnam’s estimation, “seems addicted to the adrenaline rush of violence.” It’s a testament to Putnam’s writing that Elijah comes across as likable and human.

Of Putnam’s interviewees, the one with the best chance to escape poverty is probably Sofia, a Latina in Orange County. Raised by her grandparents and older sister Lola, Sofia displayed academic aptitude early. But her American dream was foiled by “apathetic and unhelpful” public school teachers. Sofia fell too far behind the “smart kids” to be deemed worthy of serious instruction. When Putnam interviewed her, she was in community college, but remained pitifully ill-informed about the contours of the US educational system.

What about the Middle?

Our Kids, unfortunately, says very little about the middle class. The affluent families are upper-middle class at least: a few managers, a wealthy contractor, an architect, and an independent consultant. Putnam undersells the one middle-class family we meet as “working class.” The mom, Stephanie, is a retail store manager working full time, and her husband drives a forklift. Those two incomes under one roof place a household above the US median. Stephanie and her husband live in the Atlanta exurbs. Stephanie’s kids (all from her first marriage) are a mixed bag—a “golden boy,” a recent community college grad, and two “challenge children.”

Although Putnam does not emphasize the fact, Stephanie’s family is the face of today’s middle class. There are more American families like Stephanie’s than like all the other families in Our Kids combined. Perhaps the next book in this field will complement Putnam’s and Murray’s accounts, both of which focus on the top and bottom of the distribution, by examining social trends in the middle class.

Are More Government Programs the Solution?

The menu of policy options suggested in Putnam’s final chapter is stunningly disconnected from the problems he describes. Most of Putnam’s proposals recommend that the government put more money into the hands of poor people and public school teachers. Maybe that is good policy, but the stories of Our Kids, at least, comprise a persuasive argument that it is not.

In Port Clinton, 1959, children in rich and poor families were raised the same way, went to school together, and had the same range of outcomes. Government is visible in Putnam’s Port Clinton only as a school system, the military, and presumably the inspectors who shut down insalubrious company housing in which one family lived.

The nature of poverty has changed. Although Don’s father held two full-time factory jobs in the 1950s, his family often did not have enough to eat. But fifty years later, the teenaged David has enough spending money to purchase drugs and alcohol. Don had moved away by the time his family acquired a TV, while David plays video games frequently. David has gotten points on his license for speeding; Don’s family didn’t own a car. David’s failure to thrive as Don did cannot be attributed to a lack of money.

In 1959, poverty was material deprivation. For millennials, material deprivation is rare. Elijah, the grocery bagger with a hunger for violence, wears Jordans. Kayla has a TV in her bedroom. Amy and Lisa support teenage drug habits despite being on welfare. If material deprivation always closed off opportunities, the poor kids of the Class of ’59 would have had much worse outcomes than the poorly raised but adequately fed children of the millennium.

Not only were the poor millennial kids better off materially, they also encountered government representatives and programs designed specifically to help them. Kayla got attention from a school social worker, was put into a special school program, and later received Job Corps training. A librarian “helped her arrange financial aid” to a community college. She and her boyfriend live off her father’s disability benefits. David got into “a ‘career-based intervention class.’” Lisa and Amy’s family income derives from “various public welfare programs.” Amy got into a magnet school and, after bad decisions with drugs and boys, a high school for young mothers. Elijah was born into the safest and most structured environment a government can provide—an Army base in Germany—but grew up beset by crime and chaos. If twenty-first century America has failed its children, it is not for lack of government programs.

In Our Kids, the only subjects who would clearly have been aided by better academic instruction were Lola and Sofia. They were raised well and connected in their community by volunteering at an AIDS clinic. What was missing was attentive, focused teaching. But was such teaching even possible in a school where students routinely “took Ecstasy and drank” in class? Would paying teachers more or giving students’ families more money have helped? Lola and Sofia attended their terrible high school only because the other choices were too distant, yet Putnam is dismissive of school choice when he makes recommendations for education.

For the rest of the kids, better-paid teachers and free school sports simply can’t counteract the negative impact of disintegrating families.

Parenting Is a Necessity

Bizarrely, personal actions and choices that might mend the social fabric do not interest Putnam. He makes his recommendations as though individual behavior were fixed, despite his own rich portraits of lives full of momentous decisions. When he dutifully checks off the possibility of expanded local mentorship projects as a response to social disintegration, he does so in the passive voice. By contrast, the preceding paragraph contains commands: “Close this book, visit your school superintendent . . . Insist that pay-to-play [school sports] be ended.” For Putnam, money is what matters.

But, if money is not the solution, then what is? Some pessimists, such as Isabel Sawhill, pin their hopes on birth control. But Sawhill’s movement to promote teenage use of long-acting reversible contraception, and movements like it, must contend with the fact that the last major increase in contraception availability led to an historic rise in out-of-wedlock births.

As a reader, I expected that Putnam would exhort me to tutor, attend a diverse church, babysit for a single mom, move to a poorer neighborhood—to take action. After all, his fond memories of Port Clinton emphasize its warm social cohesion. Perhaps Putnam assumed the exhortation to personal action was obvious, and omitted it. If so, he missed an opportunity to turn theoretical discussions of inequality into a non-political social movement toward renewed community.

Putnam’s proposals for government transfers, better-paid teachers, and free sports teams may represent helpful stepping stones to children who are socially secure and were raised in a stable, disciplined home, as his poor classmates were. But the children of Our Kids demonstrate painfully that outside influences are too little, too late for those from broken homes.

In 1959, eight out of eight poor parents in Our Kids had been present throughout their children's lives.* In 2015, that was true of two out of twelve. Putnam does not have a plan that will help the kids whose parents have fled.

Salim Furth, Ph.D., is a research fellow in macroeconomics at the Heritage Foundation and is learning to be a father. His opinions do not necessarily reflect those of his employer.

*Correction: Due to an editorial error now corrected, this sentence as first published inaccurately stated the observations in Putnam's book.

Conservatives and Transgenderism: A Response to Jennifer Gruenke http://www.thepublicdiscourse.com/2015/07/15401/ http://www.thepublicdiscourse.com/2015/07/15401/#comments Thu, 30 Jul 2015 11:00:46 +0000 http://www.thepublicdiscourse.com/?p=15401

Conservatives should think carefully about sex and gender. There is a formidable edifice of academic work on the topic, mostly being conducted in departments that conservatives (perhaps rightly) don’t take seriously and don’t care to touch. But like it or not, our culture has imbibed deeply of gender ideology. Conservatives can’t afford to be unfamiliar with the new language and its metaphysical presuppositions.

Therefore, I welcome Jennifer Gruenke’s recent essay in Public Discourse, wherein she describes the rare intersex condition “from a biological point of view” and argues that, given the scientific facts surrounding many of these cases, conservatives should take a more tempered approach toward transgenderism. As long as other possible explanations of gender dysphoria are ruled out, she argues, conservatives should give transgender people the benefit of the doubt and take their introspective reports at face value. Because there is a plausible genetic account of transgenderism, conservatives should assume that the transgender person’s professed divergence between bodily sex and reported gender is the result of some variety of intersex condition.

Unfortunately, I do not find Gruenke’s case convincing, for it relies on an unsupported assumption and does not succeed in answering a key objection.

Gruenke’s Account

Gruenke’s account is as follows. All embryos start out, in a sense, as “female.” In males, the presence of certain hormones initiates or halts the default developmental pathways. If the right hormone is not produced, or if cells become unreceptive to it, then some or all of the pathways relevant to sexual phenotype may halt or fail to initiate at all. But since different pathways are regulated by different hormones, there is a possibility of phenotypic divergence: In a fetus with XY sex chromosomes, one part of the body might “masculinize,” in accordance with the “normal” pathway, while another part of the body does not.

This latter, unmasculinized part of the body might be the brain. In this case, Gruenke suggests, the person will grow up with a “female” brain. Therefore, it’s understandable that such a person would report being a female—having a female gender identity. But such a person would look just like a transgender person: an apparent male who reports being a female. Therefore it is plausible, Gruenke argues, to suppose that transgenderism can arise as a result of a straightforward mutation.

Gruenke does not deny that, as in some cases recently recounted at Public Discourse, divergence between bodily sex and introspective report might be the result of underlying psychological trauma. In those cases, she agrees, treatment should consist of therapy, not surgery. But, she claims, there is reason to believe that psychological trauma does not explain all cases of transgenderism. Transgenderism as a variety of the intersex condition should be our default assumption, where psychological trauma is not apparent, and gender-reassignment surgery might be an appropriate corrective.

Paradigm Cases

There is an assumption implicit in Gruenke’s argument. Her account relies on paradigm cases to determine what is constitutive of maleness and femaleness: which primary and secondary sex characteristics are male and which female, what a male or female gender identity is, etc.

Gruenke writes:

I will follow the convention of using the word “sex” to refer to the sexual characteristics of the body exclusive of the brain, and “gender” to refer to the subjective, internal experience of being a male or female. Sexual characteristics are either primary or secondary. Primary sexual characteristics develop prenatally and directly relate to reproduction (for example, having testes vs. ovaries). Secondary sexual characteristics develop at puberty, and may or may not relate to reproduction.

Suppose that transgenderism is possible, and one’s gender can diverge from one’s sex. It follows that, say, some females have male reproductive organs; some females have penises. But then what makes those reproductive organs male?

One cannot avoid appealing to paradigm cases, to what usually and typically happens, where “usually” and “typically” have both descriptive and normative force. We can see this in the truth of what some philosophers have called Aristotelian categoricals. A proposition like “Dogs are four-legged” does not claim (falsely) that all dogs have four legs; nor is it so trivial as to state that some particular dog has four legs. It rather states what is normal or typical for dogs.

In the passage quoted above, Gruenke is relying on such norms. Which sex characteristics are male depends on the role that they typically play in reproduction—or the fact (if they are not directly related to reproduction) that they occur alongside male primary sex characteristics. What counts as a mutation, or an inhibited/activated male or female developmental pathway, depends on what occurs normally.

This account of sex, then, has much in common with the account of sex identity that Christopher Tollefsen recently introduced at Public Discourse. Because human beings reproduce sexually, human beings are either male or female in the typical case, and their sex corresponds with the function that their reproductive organs can play in coitus. There is no other principled way for picking out the sexes.

As Tollefsen argues, sex being so defined, it is not even possible to change one’s sex, and attempts to do so will mutilate otherwise functional organs. So long as the practice of medicine is correctly understood as the practice of restoring human bodies to their proper functioning, gender-reassignment surgeries will fall outside the domain of medicine. The conservative can happily grant Gruenke’s biological account, for the sake of argument if not because it is true—there is a fair bit of disagreement over the science and how best to interpret it, after all. But Gruenke’s account, in what it presupposes, offers only reasons to accept Tollefsen’s argument, while offering nothing to resist his conclusion.

An Objection Unanswered

It is also worth looking at Gruenke’s response to the proposed analogy between transgenderism and psychological disorders such as anorexia. An anorexic person sees herself as being overweight, even though she is in fact underweight.

It would be silly to doubt the honesty of an anorexic person; though we think there is something wrong with her introspective report, we do not doubt that there is something behind it, that she makes it for some reason. The anorexic person might have brain chemistry similar to that of someone who is overweight. In fact, the chemical imbalance might be a result of some heritable mutation, shared by one’s identical twin. But an anorexic person’s introspective report is nevertheless incorrect.

Honesty, brain chemistry, and genetics are not sufficient to show that someone’s introspective report is correct. Nor are they sufficient to show that bodily change in accordance with the introspective report would be warranted.

Gruenke does not appreciate the force of this objection. She writes:

the analogy between people who are anorexic and those who are transgendered breaks down when we consider the respective goals of the two relevant parts of the brain. The part of the brain that regulates body weight exists so that a healthy weight can be maintained. There is a range for healthy body weight that is the norm; someone with anorexia wants to achieve a body weight that will lead to electrolyte imbalances that can be fatal.

But this first part of the response concedes the force of the objection and merely changes the subject. Again, the objection aims to show the insufficiency of honesty, brain chemistry, and genetics. To point now to some other difference (what perception of gender and perception of weight are supposed to regulate) between transgenderism and anorexia just changes the subject. It concedes that Gruenke’s biological account does not itself show that gender-reassignment surgery is an appropriate response to transgenderism; it concedes that something else is needed.

Moreover, this part of the response shares a flaw with many other responses to proposed analogies: It finds a difference between two things without arguing that it is the decisive or relevant one. Any two things that are analogous in some respect are also different in some other respect. That’s the point of an analogy.

Something else is needed—but what? The response cites a problem with anorexia: The brain is supposed to be regulating body weight. In anorexia, it does not succeed in doing so, and that is dangerous for the individual. Gruenke continues:

Thus in anorexia, subjective perception is clearly at odds with proper function of the human body. On the other hand, the part of the brain that contributes to the perception of gender doesn’t regulate anything, but exists just for psychological identity. One can survive, and even reproduce, without having any gender identity at all.

Note that the insufficiency of the mutation account is still tacitly granted. What’s needed is that, furthermore, the abnormal brain chemistry does not lead to bodily harm.

But this response begs the question against an account of sex and gender identity like Tollefsen’s. For what constitutes bodily harm? If a man’s female self-perception leads him to undergo surgery that renders him infertile, then bodily harm has occurred.

Moreover, we can apply Tollefsen’s account of gender identity too. For Gruenke claims that “the part of the brain that contributes to the perception of gender doesn’t regulate anything, but exists just for psychological identity.” In the paradigm case, sex identity and gender identity match up; this paradigm unity is not pointless. Gender does not merely serve “psychological identity,” then, but modulates how truths about our sexuality are conveyed publicly.

Ordinary Language and Gender

In debates about transgenderism, phrases like “feeling like a woman trapped in a man's body” and “the subjective, internal experience of being a male or female” (Gruenke’s definition of “gender”) are common. But the meaning of these expressions is not immediately clear.

Do you have a feeling of being a man, or a feeling of being a woman? These feelings, if they exist, are not like the feeling of being pinched, the feeling of a hot stove, or the feeling of anxiety. Unless you are "genderfluid," you have only ever felt like a man or felt like a woman: not both. You have no point of contrast. How do you know that this feeling is the feeling of being a man, as opposed to that of being a woman? Indeed, such a feeling would be an odd sort of feeling: if a given person felt it persistently, throughout his life, with no point of contrast, it would be pointless. It would not function as a signal, as our senses of pain and taste do.

To use Thomas Nagel’s famous expression, there is something that it is like to be a bat, and there is something that it is like to be a human, but there is—as far as I can tell—nothing it is like to be male.

Perhaps these expressions are supposed to be elliptical for something else, such as possessing a cluster of desires normally possessed by women. Someone inclined to say that he "feels like a woman" might desire, say, to wear dresses and high heels, to act like a woman, and to be treated like a woman. More specifically, the transgender person experiences these desires as frustrated. In that sense, he does not actually feel as most women feel, for even women who want to wear dresses and high heels generally experience those desires as fulfilled.

Now, I don't want to deny that there is something, objectively speaking, to manhood and womanhood. To cite a few stereotypes, men might tend to be more competitive, women more nurturing. Boys might tend to prefer to play with cars, and girls with dolls. Men and women are sexually different, and (if Tollefsen is right) should develop a gender persona in accordance with their sexual identity. The point is not whether any of the things just listed is true, but that denying there is "something it is like" to be a man is not the same as saying there is nothing to manhood.

An Analogy with Marriage

This is why Tollefsen’s accounts of sex identity and gender identity are particularly attractive. What is fundamental is sex identity, which is defined in relation to procreative function. Gender is the social and psychological side of this; one adopts a gender persona in order to express truths about one’s sexuality, and this is essential to being part of a community where one’s sex identity is relevant.

There is a crucial difference between Tollefsen's account of gender and Gruenke's. A gender persona is not simply given. It is developed in response to one's given sexual identity, which provides a sort of vocation: not a fully determinate life plan, but a structure nonetheless. Usually, as Tollefsen notes, society has much that is positive to contribute to the development of a gender identity. Psychological factors can sometimes make it more difficult, at times extremely difficult, to develop one's gender persona. Yet cultivating a gender persona remains, in a relevant sense, an active task.

Putting sexual identity in the driver’s seat has important advantages, especially where marriage is concerned. Our reproductive faculties are ordered to reproduction and the education—the rearing—of offspring. A man beset by salacious sexual fantasies, who desires to break apart his family in order to pursue them, ought to do otherwise because his sexual identity is ultimately ordered to his family’s good. His gender persona should be formed to the end to which his sexual identity “calls” him: responsible fatherhood.

Likewise, the orientation of the reproductive powers to reproduction provides a powerful argument that we should seek to conform our gender identity to our sexual identity—not the other way around.

Gregory Brown is a senior mathematics major at Swarthmore College and an editorial intern at Public Discourse.
