The AAU sexual assault survey and response rates

Predictably, there’s been a lot of criticism and backlash in the wake of the grim numbers reported from the AAU survey on college sexual assault. And some of that criticism has been apt – there are legitimate worries about the survey’s methodology and about the way the figures are being reported. For example, as critics (as well as the researchers involved) rightly point out, it’s a mistake to use the ‘1 in 4’ and ‘1 in 5’ figures as established, nationally representative statistics. Whether the reporting of these statistics is a sign of ‘alarmism’ about campus sexual assault, though, or just one of many, many instances of the media not reporting social science particularly well, is questionable, to say the least.

But, as Jennifer Freyd – responding specifically to an article by Emily Yoffe – points out in the Huffington Post, there are other criticisms that look more like denial tactics. Much focus has been put on the relatively broad definition of ‘sexual assault’ used by the survey. Some of this criticism just misreports what the survey actually studies – it’s very explicitly a survey of non-consensual sexual contact obtained by force, threat, or incapacitation (where this has a specific, stated definition). So, contra some complaints, if your date kisses you without your permission and you weren’t into it, that’s not something the survey would count as a sexual assault. It’s definitely the case, though, that the definition of ‘sexual assault’ in play is broader than many people might’ve expected, and will almost certainly include some murky cases. But as Freyd notes, it will also exclude other cases:

When it comes to the definition of sexual assault, I agree that one can question the decision to include in sexual contact figures various sorts of non-genital touching. But just as importantly one can also question the decision to exclude from both the one in 10 estimates for perpetration and the one in four estimate for sexual contact, cases where the perpetrator did not use physical force or incapacitation but rather relied on verbal coercions and/or failed to get consent and/or failed to heed verbal refusals to initiate sexual contact. While events involving these tactics were measured and reported on by the AAU, they were not included in the widely publicized estimates. Thus, the category of sexual assault that was publicized may be too broad in one sense and much too narrow in another.

Predictably, critics of the study have focused heavily on the cases where its definition of sexual assault might over-generalize, while ignoring those cases where it might under-generalize.

The other main concern raised has been about the low response rate. And certainly the response rate is disappointingly low, and a reason to be cautious about the results. But a common refrain has been that the survey almost certainly over-estimates the incidence of sexual assault, because victims are more likely to respond. Is that true? According to Freyd:

an equally plausible self-selection concern is that those who were sexually assaulted are more likely to avoid the survey. In fact, those of us who research and work with survivors of sexual violence know that avoidance is a hallmark of post-trauma response. The pundits, however, only worry about one sort of bias. They essentially claim low response rate equals a disproportionate number of victims in the sample. This claim is fundamentally what we call in science an empirical question. What does the empirical evidence have to say about this?

And helpfully, Freyd crunches some numbers for us:

The response rates varied considerably between institutions (from a low of 9.2 percent to a high of 63.2 percent). There was also variation in estimates of sexual assault victimization (for penetration with force or incapacitation the rates varied from a low of 5.7 percent to a high of 14.5 percent; for nonconsensual sexual contact with force or incapacitation the estimates varied between 12.7 percent and 30.3 percent). But are response rates and victimization rates correlated with one another?

If Yoffe and the other critics are right we should see that as the response rate goes up, the victimization estimates go down. What do we actually find? We can ask whether the most publicized victimization statistic — the rate of female undergraduates indicating they experienced nonconsensual sexual contact involving force or incapacitation — is correlated with response rates for female undergraduates. If there is a systematic bias, such that higher response rates lead to lower or higher estimates of sexual violence, we might expect to see that in this relationship. However, the data paint a clear picture of no significant relationship (although trending slightly positively such that higher response rate is associated with higher estimates – the opposite of Yoffe’s claim; with all 27 schools considered r=.08, ns).
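The check Freyd describes is just a Pearson correlation computed over the 27 (response rate, victimization estimate) pairs, one per school. Here’s a minimal pure-Python sketch of that calculation; the `pearson_r` helper is illustrative, not code from the study, and since the per-school figures aren’t reprinted in this post, the inputs below are toy sanity checks rather than survey data:

```python
# Sketch of Freyd's check: Pearson correlation between per-school
# response rates and per-school victimization estimates. With all 27
# schools' actual figures, Freyd reports r = .08 (not significant).
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Sanity checks on made-up toy data (NOT survey figures):
# a perfectly linear relationship gives r = 1, a perfectly
# inverted one gives r = -1.
print(round(pearson_r([1, 2, 3], [2, 4, 6]), 6))   # 1.0
print(round(pearson_r([1, 2, 3], [3, 2, 1]), 6))   # -1.0
```

If the critics were right that low response rates inflate the estimates, feeding in the real figures should yield a clearly negative r; Freyd finds a slightly positive, non-significant one instead.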

There are plenty of things to question about the AAU survey, and – as for any such survey – the results should be treated with appropriate caution. But the endless focus on only the ways in which the results might be biased toward over-stating the problem of campus sexual assault is beginning to look fairly, well, biased.