Reducing Abortions: Responding to Faulty Methodology and Presentation

 
 

Michael New's criticism of a recent study has come in for criticism itself. He responds that the study suffers from methodological mistakes and faulty presentation.


Professor Wright and his colleagues deserve credit for undertaking a thorough and methodologically sophisticated analysis of a complicated issue, the incidence of abortion at the state level. I appreciate the thoughtful response of Professor Wright to my criticisms of the Catholics in Alliance for the Common Good study. Unfortunately, his response fails to address some of my concerns with both their presentation and methodology.

Presenting the Findings

While I would not accuse the authors of explicitly hiding their findings on the effect of pro-life legislation, I certainly think it is fair to say they downplay them. Neither the conclusion nor the executive summary of the CACG paper makes any mention of their findings on Medicaid funding of abortion. On page 7 the authors mention that one of their regressions finds that Medicaid funding of abortion increases state abortion rates by 10 percent. However, they quickly point out that this finding is not statistically distinguishable from zero. No mention is made of the fact that the finding comes extremely close to conventional levels of statistical significance. While discussing another regression on page 10, the authors mention that Medicaid funding of abortion increases abortion rates. However, they fail to acknowledge that this finding is statistically significant.
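As a point of reference, "conventional levels of statistical significance" usually means a two-sided p-value below 0.05. A minimal sketch of how such a p-value is computed from a coefficient and its standard error under a normal approximation; the numbers are invented for illustration, since the CACG paper does not report these figures:

```python
import math

def two_sided_p(estimate, std_err):
    """Two-sided p-value for a regression coefficient, normal approximation."""
    z = abs(estimate / std_err)
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

# Hypothetical coefficient and standard error (invented numbers):
p = two_sided_p(0.10, 0.055)
print(f"p = {p:.3f}")  # lands just above the conventional 0.05 cutoff
```

A coefficient with a p-value of roughly 0.06 or 0.07 "misses" significance only narrowly, which is why reporting the finding as simply "not significant" can understate the evidence.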

On page 9 and in the conclusion on page 11, the authors are quick to point out that informed consent and parental consent laws have not had a significant impact on abortion rates. Their discussion could have been more nuanced. Parental involvement laws directly affect only minors; as such, the authors should at least have mentioned that analyzing their effect on the overall abortion rate is not the best way to gauge their actual impact. Similarly, they should have made some mention of the large difference between the effects of enacted and nullified informed consent laws. On page 6 they say, "The difference between states that passed legislation that was implemented and states that passed legislation that was overturned should capture the causal effect of these laws." By their own criterion, then, they actually found evidence to suggest that informed consent laws are effective. However, they make no mention of this in the paper.

Weighting

In private correspondence and in their response to my Public Discourse article, the authors seem unwilling to acknowledge that using unweighted data biases their results. Both of our studies analyze the effects of state-level pro-life laws by comparing abortion rates in states that have adopted these laws to the overall national trend. Since a number of low-population states experienced large abortion declines during the 1990s, unweighted data exaggerates this national decline. When abortion rates in states that pass pro-life laws are compared to this exaggerated national decline, pro-life laws appear less effective than they really are.
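A toy example makes the distortion concrete. The rates and populations below are invented purely to illustrate the argument, not actual state data: one large state with a modest decline, three small states with steep declines.

```python
# Hypothetical abortion rates (per 1,000 women) for one large state and
# three small states. All numbers are invented for illustration.
rates_1990 = {"Large": 25.0, "Small A": 10.0, "Small B": 10.0, "Small C": 10.0}
rates_1999 = {"Large": 22.0, "Small A": 5.0, "Small B": 5.0, "Small C": 5.0}
pop = {"Large": 9_000_000, "Small A": 300_000, "Small B": 300_000, "Small C": 300_000}

def pct_decline(weights):
    """Percent decline in the (weighted) average abortion rate, 1990-1999."""
    avg90 = sum(rates_1990[s] * weights[s] for s in pop) / sum(weights.values())
    avg99 = sum(rates_1999[s] * weights[s] for s in pop) / sum(weights.values())
    return (avg90 - avg99) / avg90 * 100

unweighted = pct_decline({s: 1 for s in pop})  # every state counts equally
weighted = pct_decline(pop)                    # states weighted by population

print(f"unweighted national decline: {unweighted:.1f}%")  # ~32.7%
print(f"population-weighted decline: {weighted:.1f}%")    # ~13.5%
```

Because the three small states count as much as the large one, the unweighted average reports a national decline more than twice as steep as the population-weighted figure, and any state compared against that inflated baseline looks worse than it should.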

Professor Wright is correct that data is weighted to correct the error-variance problems that he describes. However, social scientists weight data for other reasons as well. Survey data is often weighted to ensure that the representation of various demographic groups is consistent with their actual share of the population. Ensuring that the abortion trends in states that enact pro-life laws are compared to an accurate baseline is a perfectly legitimate reason to weight the data. The weighting methods that Professor Wright suggests would continue to distort this baseline, biasing the results. In my analysis, because there is a multiplier effect with regression data, I weight my data by the square root of the population of women of childbearing age. When the data is weighted in this manner, we can compare abortion rates in various states to the correct national trend. This allows us to gauge more accurately the impact of different types of state-level pro-life legislation.
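The weighting scheme described above can be sketched as a weighted least-squares regression. Everything in this sketch is illustrative: the simulated 50-state panel, the rates, and the effect size are invented; only the choice of weights, the square root of the women-of-childbearing-age population, reflects the approach discussed here.

```python
import numpy as np

# Simulated illustration of a weighted regression of state abortion rates
# on a pro-life-law indicator. All data below is randomly generated.
rng = np.random.default_rng(0)
n = 50
women_pop = rng.uniform(1e5, 5e6, size=n)        # women of childbearing age
law = rng.integers(0, 2, size=n).astype(float)   # 1 = pro-life law in effect
rate = 20.0 - 2.0 * law + rng.normal(0, 1, n)    # simulated abortion rates

X = np.column_stack([np.ones(n), law])           # intercept + law indicator
w = np.sqrt(women_pop)                           # square-root population weights

# Weighted least squares via the normal equations: (X'WX) b = X'W y
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ rate)
print(f"estimated effect of the law: {beta[1]:.2f}")  # close to the true -2.0
```

Using `diag(w)` in the normal equations gives each state influence proportional to the square root of its female population, so large states anchor the baseline trend instead of counting the same as small ones.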

Excluding Data

In their study, the authors exclude data from various years from Florida, Hawaii, Iowa, Louisiana, West Virginia, Wisconsin, and Wyoming. I received this information from them only through private correspondence; they make no mention of it in the CACG study, nor do they offer a rationale for excluding these data points. I should also add that the appendix in my 2004 Heritage Foundation study contains an error: West Virginia only reported abortions performed in hospitals from 1981 to 1988, not 1981 to 1998.

Professor Wright argues that the abortion data from Alabama, Iowa, and Illinois that was reported by state hospitals is not biased. He may be correct. However, it seems safer to exclude data obtained before a reported change in the collection mechanism. In my analysis I exclude a total of 22 data points from these three states. That is a small fraction of the entire data set and does little to weaken the power of the statistical model.

Most importantly, Professor Wright fails to offer any rationale for including data from Kansas. The inclusion of Kansas is probably one of the main reasons why their results differ from mine. Kansas is a statistical outlier for a couple of reasons. First, for every year between 1990 and 1999, over 40 percent of the abortions in Kansas were performed on out-of-state residents. This is by far the highest percentage in the country. Second, between 1990 and 1999 the abortion rate in Kansas increased by 60 percent. This is also by far the highest percentage increase in the country. The state with the next highest increase saw its abortion rate go up by only 18 percent during that time period.

Furthermore, since I wrote my original response, I have learned that in 1995 abortion reporting became mandatory in Kansas "for every medical care facility and every person licensed to practice medicine and surgery." Moreover, the Kansas Department of Health and Environment acknowledges that its early-1990s abortion increase may reflect "an increase in the number of abortion providers voluntarily reporting data." As such, I think that there is very good reason to exclude data from Kansas.

Methodology

The authors make use of a sophisticated statistical model that attempts to analyze both the long-term and the short-term effects of various factors on state abortion rates. I am not entirely sure that the sort of model they use is appropriate for this data set, because we lack precise information about the enactment dates for some of the pro-life laws and for some of the welfare policies, specifically family caps. This might be biasing their short-term findings.

In my analysis I attempted to replicate their results, but was unable to do so. I collected the same variables for the same set of states and years. The means and variances of the data that I collected are similar to the means and variances reported in Table 4 of the CACG study. I was hoping I could replicate their model because that would have made my critique more persuasive.

Now, it is possible that some of their variables are not having an impact in the short term, but are having long-term effects that my model is not picking up. However, I am somewhat skeptical of this, because I think that most decisions about sexual activity and pregnancy resolution are based on short-term factors. If a state suddenly reduced its welfare benefits, I think that the current low level of welfare benefits would do considerably more to influence choices about sexual activity and pregnancy resolution than the previous high level of welfare benefits.

More importantly, I have concerns about some of the independent variables that are used. The authors argue that increases in state TANF spending per person in poverty result in long-term abortion declines. However, TANF spending per person in poverty is correlated with how well the economy is doing. In most states this variable peaks during the late 1980s and early 1990s and declines throughout the rest of the decade. I have serious doubts that the short-term increase in this variable during the late 1980s and early 1990s led to subsequent abortion declines. I think it is more likely that the 1990s abortion decline was caused by the improving economy and the fact that more states were passing pro-life legislation.

Michael J. New is an Assistant Professor at the University of Alabama and a Visiting Fellow at the Witherspoon Institute. He is a contributor to Public Discourse.
