Do Faith-Based Prisons Work?
by Alexander Volokh
There are a lot of faith-based prison programs out there. As of 2005, 19 states and the federal government had some sort of residential faith-based program, aimed at rehabilitating participating prisoners by teaching them subjects like “ethical decision-making, anger management, victim restitution” and substance abuse treatment in conjunction with religious principles.
One of them – the InnerChange Freedom Initiative program in Iowa – was struck down on Establishment Clause grounds in 2006, but various faith-based prison programs still exist, including InnerChange programs in other states. InnerChange programs, which are explicitly motivated by Christian and Biblical principles, are probably more vulnerable to constitutional challenges; programs that are more interfaith and have less explicitly religious content, like Florida’s Faith- and Character-Based Institutions or the federal Life Connections Program, are probably less so.
Faith-based prisons continue to be promoted as promising avenues for reform, chiefly on the grounds that they improve prison discipline and reduce recidivism. Unfortunately – even if we ignore the constitutional issues – most of the empirical studies of the effectiveness of faith-based prisons have serious methodological problems and, to the extent they find any positive effect of faith-based prisons, can’t be taken at face value. Those few empirical studies that approach methodological validity either fail to show that faith-based prisons reduce recidivism, or provide weak evidence in favor of them.
* * *
The most serious problem with studies of the effectiveness of faith-based prisons is the self-selection problem. Prisoners obviously choose faith-based prisons voluntarily. And the factors that would make a prisoner choose a faith-based prison may also make him less likely to commit crimes in the future. (One such factor might be religiosity itself.) Also, a prisoner who takes the trouble to choose a rehabilitative program may be more motivated to change, and that motivation alone may make him more likely to change.
As a result, faith-based programs might appear to have better results because their participants have lower recidivism rates – but this might have nothing to do with whether the programs actually “work.” A program with zero effect that successfully attracts better prisoners will appear to have better results – in fact, even a program that’s slightly harmful (i.e., has a negative “treatment effect”) might appear to have better results, as long as it attracts prisoners who are sufficiently better (i.e., has a positive “selection effect”). If the positive selection effect is greater than the negative treatment effect, the program might fool naïve observers into thinking it’s a success.
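To see how this arithmetic can play out, here is a minimal simulation sketch in Python. Every number in it is invented for illustration (the motivation distribution, the enrollment rule and the two-point harmful treatment effect are my assumptions, not figures from any study): even though the program slightly raises each participant’s risk of reoffending, participants still come out looking better, simply because the motivated prisoners are the ones who enroll.

```python
# Hypothetical simulation: a slightly harmful program can still "look" effective
# if it attracts the most motivated prisoners. All numbers are invented for
# illustration; nothing here comes from the studies discussed in this article.
import random

random.seed(0)

def simulate(n=100_000, treatment_effect=0.02):
    """treatment_effect > 0 means the program slightly RAISES reoffending risk."""
    participants, non_participants = [], []
    for _ in range(n):
        motivation = random.random()             # unobserved motivation to change
        enrolls = random.random() < motivation   # motivated prisoners enroll more often
        base_risk = 0.6 - 0.4 * motivation       # motivated prisoners reoffend less anyway
        risk = base_risk + (treatment_effect if enrolls else 0.0)
        reoffends = random.random() < risk
        (participants if enrolls else non_participants).append(reoffends)
    return (sum(participants) / len(participants),
            sum(non_participants) / len(non_participants))

p_rate, np_rate = simulate()
print(f"participants reoffend:     {p_rate:.1%}")   # lower, despite the harmful program
print(f"non-participants reoffend: {np_rate:.1%}")  # higher, because less motivated
```

In this toy world the gap between the two printed rates reflects selection, not benefit; a naïve observer would read it as proof that the program works.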
Therefore, what we certainly don’t want to do is just compare the results of participants in a faith-based program with those of non-participants. (Nonetheless, some studies do this!) Such a comparison presents the self-selection effect in its most naked form – and the results of such a study can’t be taken seriously.
Other studies are slightly more sophisticated. They compare the group of participants with a matched group of non-participants, where non-participants are matched to participants based on various observable factors like race, age, criminal history and the like. Thus, suppose there are 100 participants and 1,000 non-participants. As stated above, we shouldn’t just compare the 100 with the 1,000 – the 100 are systematically different from the 1,000, because the 100 chose to participate and the 1,000 didn’t. The 100 have some sort of motivation that sets them apart from everyone else, even apart from any effectiveness of the program. Instead, what these studies do is take the 1,000 non-participants and identify 100 who “look like” the 100 participants – each of the 100 non-participants is as close as possible to one of the participants in race, sex, age, education and other observable factors. The hope is that comparing the 100 participants with the 100 matched non-participants will make for a more valid comparison.
Alas, this hope is probably unjustified. Even if you could match each of the 100 participants with a non-participant who looks very similar, you can only match prisoners on observable factors like race, sex, age and so on. But one of the most important factors – motivation to change – is unobservable. So, in my view, these studies, though somewhat more sophisticated, still aren’t good enough to overcome the self-selection problem.
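To make the mechanics concrete, here is a rough sketch of the kind of nearest-neighbor matching such studies perform; the data fields, distance weights and greedy pairing rule are all hypothetical choices of mine, not the procedure of any particular study. The point is simply that motivation appears nowhere in it, because no dataset records it.

```python
# Minimal sketch of matching on observables (hypothetical data fields and weights).
# Each participant is paired with the "closest" available non-participant on race,
# sex, age and education; motivation never enters, because it is never recorded.
from dataclasses import dataclass

@dataclass
class Prisoner:
    race: str
    sex: str
    age: int
    education_years: int
    # motivation to change: unobserved, so there is no field for it

def distance(a: Prisoner, b: Prisoner) -> float:
    """Crude dissimilarity score: penalties for mismatched categories plus scaled numeric gaps."""
    return ((a.race != b.race) * 10
            + (a.sex != b.sex) * 10
            + abs(a.age - b.age) / 5
            + abs(a.education_years - b.education_years) / 2)

def match(participants, non_participants):
    """Greedy nearest-neighbor matching: pair each participant with the most
    similar non-participant who has not already been used."""
    pool = list(non_participants)
    pairs = []
    for p in participants:
        best = min(pool, key=lambda c: distance(p, c))
        pool.remove(best)
        pairs.append((p, best))
    return pairs
```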
The third type of study uses a more sophisticated statistical technique called “propensity score” matching. Participants are matched to non-participants not on observable factors directly, but on their propensity score, that is, their estimated probability of participating in the program. But these propensity scores are generated using observable characteristics like race, sex, age, education and so on. Motivation remains unobservable, and it’s still one of the most important factors in whether a released prisoner reoffends. So propensity scores still don’t solve the self-selection problem.
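Here is a similarly hedged sketch of the propensity-score version, with hypothetical variable names and scikit-learn’s logistic regression standing in for whatever model a given study actually used. The score is estimated from observables alone, so two prisoners with identical propensity scores can still differ enormously in unrecorded motivation.

```python
# Minimal sketch of propensity-score matching (hypothetical variables; not the
# exact model of any study discussed in this article).
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_scores(X_observables: np.ndarray, participated: np.ndarray) -> np.ndarray:
    """X_observables: one row per prisoner, columns for race, sex, age, education, etc.
    participated: 1 if the prisoner volunteered for the program, else 0."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_observables, participated)
    return model.predict_proba(X_observables)[:, 1]   # estimated P(participate | observables)

def match_on_propensity(scores: np.ndarray, participated: np.ndarray):
    """Pair each participant with the unused non-participant whose score is closest."""
    treated = np.flatnonzero(participated == 1)
    controls = list(np.flatnonzero(participated == 0))
    pairs = []
    for i in treated:
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        controls.remove(j)
        pairs.append((int(i), int(j)))
    return pairs
```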
So far, we’ve seen three types of studies – naïve comparisons of participants to non-participants, matching based on some observable characteristics, and matching based on propensity scores. None of these three types of studies are credible because they don’t account for self-selection. Prisoners who are motivated enough to choose to participate in a rehabilitative program are already less likely to reoffend. So any study that compares voluntary participants and voluntary non-participants may just be picking up the effect of being a good person, not the effect of the program itself. (Some of these studies are subject to even further sources of bias. For instance, in addition to self-selection in the decision whether and how intensively to participate, there can be selection by the program staff in the decision of whom to admit or whom to kick out, as well as “success bias” in the consideration only of those who completed the program without dropping out.)
In my view, the only credible studies so far fall into a fourth category – those that compare (voluntary) participants in faith-based programs with people who volunteered for the program but were rejected.
Finally, a class of statistically valid studies! Unfortunately, the results from these studies generally aren’t good. In a 2003 evaluation of the Texas InnerChange program, there was no significant difference between how well accepted and rejected volunteers did in terms of two-year arrest or reincarceration rates. Same goes for a 2003 evaluation of the Biblical Correctives to Thinking Errors program at Indiana’s Putnamville Correctional Facility, a 2004 evaluation of the Kairos Horizon Communities in Prison program at Florida’s Tomoka Correctional Institution and a 2009 evaluation of Florida’s dorm-based “faith and character” programs.
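For readers curious about what a “no significant difference” finding rests on, here is a minimal sketch of a standard two-proportion z-test. The counts are invented for illustration; they are not the figures from the Texas, Indiana or Florida evaluations, each of which had its own sample and methods.

```python
# Hypothetical example of the comparison these studies make: two-year
# reincarceration among accepted vs. rejected volunteers. The counts below
# are invented, not the data from any actual evaluation.
from math import sqrt

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Classic two-sample z-test for a difference in proportions."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Say 45 of 150 accepted volunteers and 52 of 160 rejected volunteers were
# reincarcerated within two years.
z = two_proportion_z(45, 150, 52, 160)
print(f"z = {z:.2f}")   # |z| < 1.96, so no significant difference at the 5% level
```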
I’ve looked at two evaluations of an after-care program for ex-prisoners, the Detroit Transition of Prisoners program. This program may confer some benefits, though it’s hard to say because the results aren’t reported in a form that would make this easy to determine. But even if this program is successful, we still have to grapple with the “resources problem”: The studies compare participation in the program either with the alternative of no program at all or with the “business as usual” alternative of whatever other programs happen to be available, rather than with participation in a comparably funded secular program. Thus, even if a religious program is better than nothing at all, it could be because of the greater access to treatment resources (for instance, mentors and counselors) and not because of the religious content of the program.
* * *
In the end, this article has bad news and good news.
The bad news, as explained above, is that most studies are low-quality and the results of the higher-quality studies aren’t promising. There seems to be little empirical reason to believe that faith-based prisons work.
The good news is that there’s also no proof that they don’t work. The absence of statistically valid or statistically significant findings isn’t the same as the presence of negative findings. And while the self-selection problem is real and important, the resources problem may not even be a problem at all: maybe the “zero alternative” or the “business as usual” alternatives really are proper empirical baselines, since they reflect both reality and, perhaps, political feasibility. So the picture isn’t uniformly bleak; there are some programs that seem to show some statistically significant effects, even if they’re weak and even if we’re not sure how well they compare to the hypothetical effects of a hypothetical, comparably funded secular program.
Perhaps future research will shed light on these questions. In the meantime, clearly some groups want to have faith-based prisons, some prisoners want to attend them and they probably do little if any harm. If some programs don’t work, this is an indication to future practitioners that something needs to be changed; if some programs work, maybe they can be replicated elsewhere. Better results won’t emerge unless they’re allowed to emerge by a process of experimentation.
Alexander Volokh blogs at the Volokh Conspiracy (www.washingtonpost.com/news/volokh-conspiracy) and is an Associate Professor of Law at the Emory University School of Law; this is a synopsis of his research on faith-based prisons, which was published in the Alabama Law Review (Vol. 63, 2011). He provided this article exclusively for Prison Legal News.