Falsity, inaccuracy, and rape allegations

There is increasing evidence that the central case in the Rolling Stone article about sexual assault at UVA may not be accurate. People seem to be jumping from ‘may not be accurate’ to ‘is totally fabricated’ fairly quickly, which, as Think Progress helpfully points out, is a mistake:

False rape allegations are difficult to quantify because so many sexual assaults go unreported. But the best research in the field estimates that they are extremely rare. Estimates put the rate of false rape reports around 2.2 percent, and the women who file false claims often receive punishments that are far worse than the consequences for actual college rapists.

For victims of sexual assault who choose to speak out, the proverbial deck is often already stacked against them. They’re up against the societal assumption that they must just want to take down popular frat boys — a narrative solidified by the infamous Duke lacrosse case — and, on top of that, they often aren’t able to tell their stories in believable ways.

Research has found that victims of sexual assault often display erratic behavior, like expressing no emotion, mixing up the chronological order of events, or laughing at inappropriate times. Police officers often interpret that as evidence that they’re not telling the truth. But that behavior actually reflects the disjointed way that the brain processes trauma, not a carefully constructed lie.

Think Progress also offers a useful reminder that whether or not this particular story is an accurate account of gang rape, gang rape happens on college campuses. (As noted, some of the skepticism about the story seems to be founded simply on the idea that it’s too terrible to be believed – a bizarre thought, given the reality of sexual assault on college campuses.)

41 thoughts on “Falsity, inaccuracy, and rape allegations”

  1. Apart from the “inaccurate” details disputed by the fraternity (the time and location of the incident and the identity of its lead perpetrator), what are the remaining “facts”?

  2. I agree that, from the fact that a report is not completely accurate, it does not follow that it was entirely fabricated. It may well be that Jackie was actually raped, though her account was inaccurate (it’s even possible that she *lied* about parts of her report but that she was nevertheless raped), in which case it must be really terrible for her to see her assault denied in the press and on the Internet. I think we should be extremely careful in drawing conclusions about particular cases based on publicly available information.

    But I would like to comment on something that Think Progress says in that article, namely that “the best research in the field estimates that [false accusations of rape] are extremely rare”. To support that claim, the author of the article links to a 2010 paper by Lisak et al. (http://www.icdv.idaho.gov/conference/handouts/False-Allegations.pdf), which has become a classic. It was also cited by kimberlyvo in a recent comment (https://feministphilosophers.wordpress.com/2014/12/03/skepticism-and-the-uva-rape-story/#comment-134508) on another post about the UVA story.

    The Lisak et al., 2010 study and review of the literature on the prevalence of false accusations of rape is frequently cited, but I think it doesn’t show at all what it claims to show. After conducting their own study and reviewing the literature on the question, the authors conclude that between 2% and 10% of rape accusations are false.

    But if you read the paper, you will see that what it actually shows is that, in the various studies on the prevalence of false accusations of rape, between 2% and 10% of accusations were *determined* to be false. Now, clearly, some accusations that *are* false cannot be *determined* to be false.

    For instance, in the study that Lisak et al. conducted, only 5.9% of the accusations examined were determined to be false, but 44.9% of them were classified as “case did not proceed”. Here is how Lisak et al. explain what this means: “This classification was applied if the report of a sexual assault did not result in a referral for prosecution or disciplinary action because of insufficient evidence or because the victim withdrew from the process or was unable to identify the perpetrator or because the victim mislabeled the incident (e.g., gave a truthful account of the incident, but the incident did not meet the legal elements of the crime of sexual assault).”

    Clearly, some of the accusations classified as “case did not proceed” must have been false (though we don’t know how many), but they were not counted in the 5.9% of accusations that Lisak et al. count as *false* (they couldn’t have been included), which are in fact the accusations that were *determined* to be false. Of course, it is also true that not all of the accusations that were classified as “case did not proceed” were false, but that is not a reason to conflate the category of false accusations with that of accusations that were determined to be false.

    Moreover, 13.9% of the accusations were classified as “insufficient information”, which means: “This classification was applied if a report lacked basic information (e.g., neither the victim nor the perpetrator was identified, and there was insufficient information to assign a category).” Those were also not counted in the 5.9% of accusations that are presented as false, but clearly some of them may well have been false.

    Finally, 35.3% of the accusations were classified as “case proceeded”, which means: “This classification was applied if, after an investigation, the report resulted in a referral for prosecution or disciplinary action or some other administrative action by the university (e.g., the victim elected not to pursue university sanctions, but the alleged perpetrator was barred from a particular building).” But, for all we know, even some of those may have been false.

    So, as far as I can tell, we don’t actually know how often accusations of rape are false, though we know that it must be more than between 2% and 10%. It seems to me highly misleading to present things as Lisak et al., 2010 does. Just to be clear, I’m not saying that a large number of rape accusations are false (I don’t even know what “a large number” is supposed to mean), but only that we don’t know how many are and that Lisak et al. were careless in presenting things as they did. Obviously, that doesn’t mean that people who claim to have been raped should be treated with suspicion by the authorities; I think that would also be very bad.

    I would also like to comment on another claim that is made in the Think Progress article, namely that “false rape allegations are difficult to quantify because so many sexual assaults go unreported.” In my experience, when discussing the prevalence of false rape accusations, people often make that point. But while there are excellent reasons to think that sexual assaults very often go unreported, it is important to understand that this is entirely compatible even with a high number of false accusations. (Just to be absolutely clear, I am *not* saying that a high number of rape accusations are false. As I have just explained, I think we don’t know how often such accusations are false.)

    By definition, the prevalence of false accusations of rape only concerns rapes that *have* been reported, so although I agree that it’s a big problem that rape is frequently not reported, it doesn’t make it any more (or less, for that matter) difficult to assess how often accusations of rape are false.

  3. Philippe, the methodology in the Lisak paper (which has been replicated in many other studies, almost all generating similar results) doesn’t get a 2-10% estimate for allegations *determined* to be false. That study – and many others like it – gets that estimate for allegations *coded* as false. Those are somewhat different things – at least if you’re taking ‘determined’ to mean something like ‘proven’. An allegation can be coded as false without being proven ‘beyond reasonable doubt’ (or whatever) to be false.

    I think you’re right that we don’t know exactly what percentage of rape allegations are likely to be false. But I think there’s an overwhelming amount of evidence that outright-false rape allegations are – whatever the numbers – very rare. See, especially, the discussion here:

    http://bjc.oxfordjournals.org/content/early/2012/08/03/bjc.azs036.full

  4. Furthermore, your comment that the rate of false allegations must be greater than 2-10% is predicated on the idea that all the allegations coded as false are, in fact, false. But that’s something that’s been called into question in criticism of the Lisak et al methodology. For example, allegations are coded as false if they are withdrawn. But it’s been pointed out that members of vulnerable communities can and do withdraw allegations due to various social pressures, even when those allegations are true.

    https://www.ncjrs.gov/App/Publications/abstract.aspx?ID=255227

  5. Here is what it means, in the Lisak et al. study, for a report to be classified as “false report”: “Applying IACP guidelines, a case was classified as a false report if there was evidence that a thorough investigation was pursued and that the investigation had yielded evidence that the reported sexual assault had in fact not occurred. A thorough investigation would involve, potentially, multiple interviews of the alleged perpetrator, the victim, and other witnesses, and where applicable, the collection of other forensic evidence (e.g., medical records, security camera records). For example, if key elements of a victim’s account of an assault were internally inconsistent and directly contradicted by multiple witnesses and if the victim then altered those key elements of his or her account, investigators might conclude that the report was false. That conclusion would have been based not on a single interview, or on intuitions about the credibility of the victim, but on a “preponderance” of evidence gathered over the course of a thorough investigation.”

    It’s true that, even when a report is judged to be false based on those guidelines, it may well in fact be true. After all, even if there is evidence that a rape didn’t occur and no evidence that it did occur, it may nevertheless have occurred. So you’re right that my conclusion that, although we don’t know how often accusations of rape are false, we know that it must be more than between 2% and 10%, doesn’t follow. And I agree that my description of those reports as “determined to be false” was misleading. Basically, you’re pointing out that, just as we should not conflate the category of reports that were not judged to be false with that of reports that are in fact true, we should not conflate the category of reports that were judged to be false with that of reports that were actually false. This strikes me as entirely correct and I’m happy to stand corrected on that point. It’s still true that we don’t know how frequently rape accusations are false, but it seems that we know even less than what I initially claimed.

    But let’s have a closer look at what Lisak et al. found in the study they conducted. Here is how they report the results of that study: “Of the 136 cases of sexual assault 8 (5.9%) were coded as false reports, 61 (44.9%) did not proceed to any prosecution or disciplinary action, 48 (35.3%) were referred for prosecution or disciplinary action, and 19 (13.9%) contained insufficient information to be coded (see Table 2). It should be noted that in no case did the research team “override” the classification of a false report made by the police department. The eight cases that were described as false reports by the police investigators were also categorized that way by the coders.
    Of the eight false reports, three involved clear admissions from complainants that they had fabricated the report for ulterior motives, and a fourth investigation yielded a partial admission, combined with other evidence that facts had been fabricated. Three cases were coded as false reports after extensive police investigation—multiple witness interviews and careful fact checking—yielded evidence that the reports were fabricated, even though the complainant did not ultimately state that her report was false. A final case was coded as a false report even though it was complex and ambiguous. The complainant recanted her report, but the facts yielded by the investigation suggested that her initial report was as much a mislabeling of the incident as a deliberate effort to fabricate.”

    (It should be noted that Lisak et al. did not automatically classify a report as “false” as long as it had been withdrawn, since some of the reports that had been withdrawn were classified as “case did not proceed”. It seems that a report that had been withdrawn was classified as “false” only if a thorough investigation had yielded evidence that no attack occurred and/or the victim admitted that she had “fabricated the report for ulterior motives”.)

    Now, let’s assume that only 50% of the reports that were classified as “false report” were actually false, that 90% of the reports that were classified as “case did not proceed” were in fact true, that 100% of the reports classified as “case proceeded” were true and that 100% of the reports classified as “insufficient information” were also true. I think anyone who has read the criteria used to classify the reports in those four categories, as well as the information on the cases that were classified as “false report”, will agree that those assumptions err significantly on the side of caution, i.e. they are extremely likely to underestimate the number of reports that were actually false. Yet, even on such favorable assumptions, the number of reports that were actually false comes out at about 10, i.e. 7.3% of the reports examined by Lisak et al. This is still more than 5.9%, the proportion of reports which Lisak et al. presented as false, although those were only the reports that had been classified as “false report”.
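    For anyone who wants to check the arithmetic, the sensitivity calculation above can be sketched in a few lines of Python. The category counts are those quoted from the Lisak et al. study; the per-category rates are the deliberately cautious assumptions made in this comment, not findings of the study itself:

```python
# Sensitivity check on the Lisak et al. (2010) category counts, using the
# cautious assumptions from the comment above: 50% of "false report" cases
# actually false, 10% of "case did not proceed" actually false, none of
# the reports in the other two categories false.

counts = {
    "false report": 8,
    "case did not proceed": 61,
    "case proceeded": 48,
    "insufficient information": 19,
}
assumed_false_rate = {
    "false report": 0.50,
    "case did not proceed": 0.10,
    "case proceeded": 0.00,
    "insufficient information": 0.00,
}

total = sum(counts.values())  # 136 reports in the study
estimated_false = sum(counts[c] * assumed_false_rate[c] for c in counts)

print(total)                                    # 136
print(round(estimated_false, 1))                # 10.1
print(round(100 * estimated_false / total, 1))  # 7.4 (%)
```

    The exact figure, 10.1 of 136 reports, is about 7.4%; rounding down to 10 reports gives the 7.3% cited in the comment. Either way, it exceeds the 5.9% that the study itself coded as false.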

    I haven’t done the same thing for the other studies that Lisak et al. mention, which claim to have found a prevalence of false reports between 2% and 10%, but since they all suffer from the same basic defect as the study Lisak et al. conducted (i.e. they conflate not being judged to be false with being true and being judged to be false with being false), I suspect the same thing is true of them (although ultimately it depends on the distribution of reports across the different categories, which I’m assuming is similar in all of those studies). So it seems to me reasonable to conclude that, although we don’t know how often accusations of rape are false (that much seems clear), we have some reason to think it’s *probably* more than between 2% and 10%.

    Oh, and thanks for the two papers you mentioned – I don’t have time to read them right now, but I will later!

  6. Yes, I should’ve said reports were coded as false when the reporter *recanted* them, rather than simply withdrew them. But, as has been pointed out, people in vulnerable positions are often pressured into recanting, for a variety of reasons.

    There’s quite a lot of literature about the methodology in these types of studies, done by trained social scientists. This work is, to my mind, a better stepping-off point than speculative calculations. Whatever the specific numbers are – and however we might quibble about those – the overarching point is that there’s a lot of evidence that false allegations of rape are *rare* (and not more common than false allegations of other crimes).

  7. To be clear, I wasn’t suggesting that accusations of rape are more often false than for other types of crime, I have no idea about that. Since I think we don’t know how often accusations of rape are false, I don’t see how anyone could say that, though unfortunately I have no doubt that some people do. But, for the same reason, it’s not clear to me how we could know that accusations of rape are just as often true as for other types of crime.

    Of course, there may be evidence that false accusations of rape are no more common than false allegations of other types of crime, which is not the same thing, but I don’t know about that so I’m reserving judgment. (If you know of a paper that substantiates that claim, however, I would be interested in reading it. Sorry if one of the two papers you already mentioned above satisfies that description, but as I said before, I haven’t had time to read them yet.)

    All I’m saying is that, based on Lisak et al., 2010, we are not justified in concluding that between 2% and 10% of rape accusations are false, because the studies it reviews and the one it presents conflate “being judged not to be false” with “being true” and, as you pointed out in correcting me, “being judged to be false” with “being false”. For all we know, the prevalence of false rape accusations may be higher and it may also be lower, though for reasons indicated above I doubt the latter is the case.

  8. I would say one last thing about studies like Lisak et al., 2010 and why I think that, although it’s possible that they overestimate the prevalence of false allegations of rape, it seems more likely that they underestimate it. It’s true that, as magicalersatz pointed out, this judgment is speculative, but I think it’s useful to say exactly why and to what extent it’s speculative.

    The researchers who produce those studies have examined a number of allegations of rape and judged some of them to be false. What they present as the prevalence of false allegations which can be inferred from the data is really the ratio of the number of allegations they judged to be false to the total number of allegations they examined. Of course, as I already explained above, this is *not* the prevalence of false allegations of rape (not even in the samples examined in those studies), only the proportion of allegations those researchers judged to be false in the samples they examined.

    Now, it can easily be proved – it just takes a few elementary algebraic manipulations, so nothing substantive about the prevalence of false allegations of rape enters into the proof – that a study of that sort underestimates the prevalence of false allegations of rape if and only if the proportion of reports examined by the researchers which they judged to be false even though they were true is less than the proportion of reports which they did not judge to be false even though they were false.

    So, my claim that it’s more likely that studies of the sort examined in Lisak et al., 2010 underestimate the prevalence of false allegations of rape than they overestimate it is speculative exactly to the extent that it’s speculative to think that the proportion of reports examined by the researchers which they judged to be false even though they were true is less than the proportion of reports which they did not judge to be false even though they were false.

    Of course, that assumption *is* speculative, but at least in the case of the study conducted by Lisak et al., for which I have some details about how they made their judgments and about the particular cases they examined, I find it hard to imagine that anyone reasonable would disagree with it.

  9. Philippe, I disagree with much of what you say, but I’m running short on time to write an adequate reply (trying to finish my term papers). I just wanted to say quickly, though, that it is extremely important to not conflate rape reports with rape accusations. These are two different things, and it matters to the numbers. ‘Rape accusations’ refers to when a particular person or persons are accused of rape. ‘Rape report’ just refers to when a rape is reported. False accusations are even less common than false reports.

  10. Kathryn, thanks for your remark, the point is well taken. It’s true that I’ve been careless in using the term “accusation” interchangeably with “report”. In everything that I say above, you should read “report” every time I wrote “accusation”. Perhaps that’s enough to take care of what you took to be a disagreement between us, but if not, then I would be interested in knowing what you think is wrong with what I said when you have more time.

  11. I have read Candida Saunders’s “The Truth, the Half-Truth, and Nothing Like the Truth”, which magicalersatz cited above. I thought it might be useful to briefly summarize it for those who don’t have time to read it.

    Saunders argues that, while researchers and law enforcement/criminal justice professionals seem to disagree on the prevalence of false allegations of rape, this disagreement might to some extent be verbal. That’s because researchers and law enforcement/criminal justice professionals don’t classify the same things as “false”.

    She makes a distinction between false complaints and false accounts, which is helpful to understand what’s going on. (It should be noted at the outset that, unless I misunderstood Kathryn, this distinction is *not* the distinction between false accusations and false reports she drew in her comment above.)

    A false complaint is a report that law enforcement/criminal justice professionals classified as “false” because they judged that it had been completely fabricated and that in fact no rape had occurred. By contrast, a false account is a report they classified as “false” even though they believe a rape did occur, because the complainant more or less explicitly lied about the circumstances in which the assault occurred.

    Saunders argues that, if you explained that distinction to law enforcement/criminal justice professionals and asked them what they think the prevalence of false *complaints* is, you would get results more in line with the numbers found by researchers.

    She also makes the point, which I think is interesting, that we should not ignore the phenomenon of false *accounts*, because their relative frequency might go some way to explain why conviction rates for rape are so low, which law enforcement/criminal justice professionals are blamed for.

    That’s the main point of Saunders’s article, but at the beginning, she also makes the point that I have made earlier, namely that allegations researchers do not classify as “false” may nevertheless be false. And, as was already noted above, we might add that allegations they classify as “false” may actually be true.

    Of course, the same thing is true for what law enforcement/criminal justice professionals classify as “false” and what they don’t, but obviously not in the same way.

    I think it’s helpful to make that a little bit formal to clarify the situation. In what follows, what I call “investigators” can be either researchers or law enforcement/criminal justice professionals, as the case may be.

    (I’m ignoring Kathryn’s distinction between false accusations and false reports here, but that shouldn’t affect my point. I’m using “allegation” to talk about everything she calls a “report”, even when the allegation does not target specific individuals.)

    r = proportion of allegations judged to be false by the investigators that are indeed false
    s = proportion of allegations not judged to be false by the investigators even though they are false
    x = number of allegations judged to be false by the investigators
    n = number of allegations examined by the investigators in total

    The investigators get the proportion of false allegations right iff the number of allegations they judge to be false is the same as the number of allegations that are indeed false, i.e. x = rx + s(n – x).

    Since this equality can easily be shown to be equivalent to (1 – r)x/(n – x) = s, it means that investigators get the proportion of false allegations right iff the proportion of allegations they judged to be false even though they were true is equal to the proportion of allegations they did not judge to be false even though they were. So far, I don’t see how anyone could deny that, it’s just straightforward conceptual analysis and basic arithmetic.
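    The identity is easy to confirm numerically. Here is a small sketch in Python; the values of n, x and r below are arbitrary illustrative numbers, not estimates of anything, and s is chosen precisely so that the balance condition holds:

```python
# Numerical check of the identity above: investigators recover the true
# count of false allegations, x = r*x + s*(n - x), exactly when
# (1 - r) * x / (n - x) == s.

n = 200  # total allegations examined (arbitrary)
x = 20   # allegations judged false (arbitrary)
r = 0.7  # fraction of judged-false that really are false (arbitrary)
s = (1 - r) * x / (n - x)  # chosen so the balance condition holds

actually_false = r * x + s * (n - x)   # true number of false allegations
assert abs(actually_false - x) < 1e-9  # judged count matches true count

# Perturb s upward: more false allegations are missed, so the
# investigators' count x now undershoots the true count.
s_high = s + 0.05
assert r * x + s_high * (n - x) > x
```

    Perturbing s in the other direction would make the judged count overshoot, which is the formal version of the point made in the surrounding comments: whether a study under- or over-estimates prevalence depends entirely on which side of this equality the errors fall.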

    My point is that, whether the investigators be researchers or law enforcement/criminal justice professionals, we have no reason to think that this equality holds. Therefore we are not warranted in taking their estimation of the prevalence of false allegations of rape to be accurate, not even if the samples of allegations they have examined are representative.

    In fact, I want to make a stronger point, namely that we have very good reasons to think this equality does *not* hold, and that it fails to hold in opposite ways depending on whether the investigators are researchers or law enforcement/criminal justice professionals.

    Indeed, I think anyone who has read Lisak et al., 2010 carefully will conclude that, when the investigators are researchers who produced studies which Lisak et al. would deem kosher, the proportion of allegations they judged to be false even though they were true is extremely likely to be *less* than the proportion of allegations they did not judge to be false even though they were.

    Conversely, I think that Saunders’s paper strongly suggests that, when the investigators are law enforcement/criminal justice professionals, the proportion of allegations they judged to be false even though they were true is extremely likely to be *greater* than the proportion of allegations they did not judge to be false even though they were.

    So I think not only that we don’t know that between 2% and 10% of rape allegations are false (the range toward which studies deemed kosher by Lisak et al. converge), despite what is often claimed by researchers and activists alike – that much should be absolutely uncontroversial – but also that we have good reasons to believe that the prevalence of false rape allegations lies somewhere between the estimate of researchers and that of law enforcement/criminal justice professionals. That is to say, higher than between 2% and 10% (though I suppose it could still fall within that range; my point is just that researchers probably underestimate the prevalence of false allegations in the samples they examine), but lower than what law enforcement/criminal justice professionals are prone to say when they are not asked to distinguish between false complaints and false accounts in Saunders’s sense.

  12. Couldn’t “false accounts” be the product, not of lying, but of confusion due to post traumatic stress or to the simple fact that during a difficult experience, such as a rape, most people are too scared or too freaked out to accurately recall details?

  13. It most certainly could and probably sometimes is. If I understood Saunders correctly, for a report to be a false account in her sense, the law enforcement and criminal justice professionals who examined it must have judged that the complainant more or less explicitly lied, but of course they could be wrong about that. (I realize that my explanation of the concept above is a bit misleading, because it suggests that a report is a false account only if the complainant *actually* lied.) In fact, strictly speaking, not only could they be wrong in judging that the complainant knowingly made false claims, but they could also be wrong that what the complainant said was indeed false. (Although, based on the examples Saunders discusses in the paper, I doubt the latter happens very often.) So that’s right, thanks for your remark.

  14. Thanks for the summary, Philippe! I’d like to add two key points from the Saunders paper (one of which was implicit in your summary, but which I’d like to make more explicit). Her structured interviews suggest (though of course don’t prove) that false complaints (i.e., complaints of rape where rape has not in fact occurred) are, unlike ‘false accounts’, very rare. Her interviews also suggest that false complaints are often – though of course not always – correlated with duress on the part of the complainant. So, for example, women tend to make false complaints in the context of abusive or dysfunctional relationships, mental illness, etc.

    I think we agree, Philippe, that we don’t – and possibly can’t – know the rate of false accusations of rape. (I should note that I’m speaking only for myself here, and others on the blog will disagree with me on this – we are, as we continue to emphasize, not a monolith of opinion here.) As the link Matt gives helpfully puts it, it isn’t clear how this is something we could ever really know for certain, or something that social science could decisively investigate.

    Nevertheless, I think we’ve got a variety of pieces of evidence which jointly *suggest* that we’re at the very least *justified in believing* (let’s leave knowledge out of it, in case that creates noise) that false accusations and/or reports of rape are rare. We have quite a lot of evidence that reports which can be coded as false are rare. We also have evidence from structured interviews and the like (present in the Saunders paper, and many others) suggesting that false reports are rare. And finally, we have the very simple, and very obvious, social fact that reporting a rape – whether it’s a true report or a false one – is a very risky thing for a woman (or a man!) to do. Going public with an allegation or report of rape is routinely a process filled with skepticism, shame, and humiliation. Going public with a *false* report is quite clearly something that’s extraordinarily high risk. (Think about how vividly we remember high-profile reports of rape taken to be false. In the conversation about the UVA case, people routinely reference Duke lacrosse and Tawana Brawley*, on the assumption everyone will know what they’re talking about. Name a rape victim who lives in our memory that strongly.)

    It really seems that to think that false reports of rape are anything other than rare, given this context, you’d have to think that either women are inveterate liars or women are crazy. A lot of people probably think either or both of these, at varying levels of explicitness. But I don’t think either, and I don’t think the jury should be out on either until we have some study from social science to settle the question.

    Anyway, while we probably disagree on some issues, I appreciate your willingness to come here and have this conversation openly and honestly.

    *I want to be clear that I’m not assuming Brawley’s story is false – I’m just using it as an example of one assumed to be false. She has, apparently, never retracted it. Her case, though, is a good example of associated trauma. It seems pretty clear that whether or not her story is false, *something* traumatic happened to her.

  15. I’ll also add that it’s really important not to frame this issue (as my above comment probably does – sorry!) as one of doubting the word of women vs. protecting the rights of accused men. It’s easy to forget that many victims of rape are themselves men. (And, on occasion, perpetrators are women.) That point is particularly apt in discussing the UVA case, where the issue is rape culture in fraternities. Men are raped in fraternities, just as women are, but it’s something we don’t talk about quite as much.

  16. Magicalersatz, I’m in total agreement. Trauma may affect one’s testimony, but it also may not. I think that the lingering sense that rape reports are just getting at guys may well draw on unfortunate stereotypes that have come down to us from over the centuries.

  17. Well, “rare” is a vague term, but of course that doesn’t mean we can never assert sentences that contain it. In particular, I personally never doubted that false rape allegations were rare, though I don’t necessarily agree with all the reasons you give to justify that claim. But I think we should also make clear that, in that context, the threshold for “rare” to be applicable is not so low that, if false rape accusations are rare, then we can be casual about some of the most basic principles of a liberal justice system. (It would have to be incredibly low for this to be the case.)

    I know that nobody said anything like that in this conversation and I suspect that many will think it goes without saying, but as Talleyrand famously said in another context, “if it goes without saying, then it will go even better by saying it.” At least, given how universities often seem to handle sexual misconduct complaints, I’m personally not comfortable saying that false rape allegations are rare without adding that clarification. Of course, I agree that we should also say that universities and law enforcement/criminal justice professionals must take seriously every allegation of rape and not assume the complainant is lying, but I think that would be true even if false rape allegations were not rare.

    So, although I’m sure we disagree about a number of things, you’re right that it doesn’t look as though we disagree that false rape allegations are rare. But it was never my point to deny that, my point was only to deny that studies have shown that the prevalence of false rape allegations is between 2% and 10% (something activists and researchers alike repeat ad nauseam), because not only is it clear that the studies in question have shown no such thing, but in fact I think we have good reasons to think they underestimate the prevalence of false rape allegations.

    That’s all I ever said and, as far as I can tell, it’s a pretty weak and well supported claim. In particular, it doesn’t imply that false rape allegations are not rare, which I think they are. And, for what it’s worth, I also appreciate that we were able to have this conversation here. I actually think it was productive and interesting. At the very least, it’s helped me clarify my own thought on the subject.

  18. a false account is a report they classified as “false” even though they believe a rape did occur, because the complainant more or less explicitly lied about the circumstances in which the assault occurred.

    I am glad that I do not, personally, have a lot of experience with rape or sexual assault complaints. I expect that I would find that work extremely depressing in a way that would be bad for me as a person if I were to do it, and I admire those who can do it. But I do have a fair amount of experience, from two sources, that is somewhat relevant for thinking about these issues, or so I think. First, my father is a retired police officer, and I have both spent a lot of time talking with him about police work and ridden along with him while he worked on perhaps 100 or more occasions. Second, one of my main areas of legal work – asylum claims – also involves an area of law where there are fairly strict legal categories and where important consequences turn on credibility determinations made by government officials who may not be especially predisposed to accept the stories told by the claimants.

    One thing that I’ve learned in both of these contexts is that, _even when something very bad that the law can do something about_ has happened, there is a strong and completely understandable impulse to fit one’s situation into a somewhat cleaner or nicer version, on the belief (not always unwarranted) that one is more likely to get legal redress that way. In a significant number of cases (I can’t speculate how many, but it’s much more than zero), this backfires. It can lead to someone who has, say, a perfectly plausible if not slam-dunk asylum claim being found not credible by an immigration judge or asylum officer, because the bits added to or removed from the story in hopes that it would be “better” in fact lead to inconsistencies. In other cases, these “inconsistencies” come in because the applicant is embarrassed or ashamed of some aspect of what happened, and tries to cut that part out or change it.

    In either case, the inference from the fact that there are some problems with the story to the conclusion that it’s all made up, or that nothing problematic happened, need not be a good one. While I can’t say that I know for sure, I would be very surprised if these dynamics didn’t also take place in rape or sexual assault cases.

  19. What you describe about how complainants can end up lying to the authorities in a way that often backfires on them corresponds exactly to what the law enforcement/criminal justice professionals Saunders interviewed have told them. I think it makes a lot of sense, when you think about it. But, according to what she says in her paper, it seems that in general law enforcement/criminal justice professionals don’t conclude that no rape occurred just because the complainant did that. In other words, although they call that kind of report “false”, it’s not because they think that no rape occurred. From what Saunders explains in her paper, it seems they only reach that conclusion if there is strong evidence that indicates it.

  20. I meant: “what the law enforcement/criminal justice professionals Saunders interviewed have told *her*”.

  21. To clarify, Saunders (iirc) only interviews law enforcement. If ‘criminal justice professionals’ include lawyers, judges, etc, then they’re outside the scope of her interviews. Part of her point is that people on the legal side of things may be over-estimating the perceived prevalence of false complaints because of the conflation between false complaints and false accounts, which is in keeping with what Matt says above.

  22. Well, she interviewed several prosecutors, don’t those count as “criminal justice professionals”? (At least, the prosecutors are those I was thinking about when *I* talked about “criminal justice professionals”.) In fact, I think she mostly talked to prosecutors, though she also had a few interviews with police.

    As for the rest, maybe I wasn’t clear, but I was just pointing out that it doesn’t look as though the people Saunders interviewed conclude that no rape occurred just because they believe the complainant lied about certain details, which I took to be Matt’s worry.

    On the contrary, from what Saunders explains, it seems that they understand perfectly well the kind of things Matt talked about (what they told her about that in fact sounds a lot like what he explained), i. e. the kind of reasons that lead complainants to lie about what exactly happened even when they were indeed raped.

    In fact, if I remember correctly, the people she interviewed were kind of pissed off because they often believed a rape did occur but the lies made it difficult to get a conviction. So I don’t think they generally make the mistake Matt was afraid they might be making.

  23. She interviewed police and Crown Prosecutors, not general lawyers and judges. (Sorry, I should’ve said Crown Prosecutors as well in my previous comment.) She also hypothesizes that such professionals’ claims that false accounts are common cause confusion, and make those professionals more likely to resist, and to feel themselves at odds with, helpful research coming out of the social sciences – even when they’re the people who could be a ‘front line for change’, as she puts it. All of this seems in keeping with Matt’s comment.

    I think this will be my last comment on the Saunders paper.

  24. Philippe, I’m not sure I follow your argument here. You were saying:
    “Now, it can easily be proved – it just takes a few elementary algebraic manipulations, so nothing substantive about the prevalence of false allegations of rape enters into the proof – that a study of that sort underestimates the prevalence of false allegations of rape if and only if the proportion of reports examined by the researchers which they judged to be false even though they were true is less than the proportion of reports which they did not judge to be false even though they were false.”

    A later post went into further detail:
    “The investigators get the proportion of false allegations right iff the number of allegations they judge to be false is the same as the number of allegations that are indeed false, i. e. x = rx + s(n – x). Since this equality can easily be shown to be equivalent to (1 – r)x/(n – x) = s, it means that investigators get the proportion of false allegations right iff the proportion of allegations they judged to be false even though they were true is equal to the proportion of allegations they did not judge to be false even though they were. So far, I don’t see how anyone could deny that, it’s just straightforward conceptual analysis and basic arithmetic.

    r = proportion of allegations judged to be false by the investigators that are indeed false
    s = proportion of allegations not judged to be false by the investigators even though they are
    x = number of allegations judged to be false by the investigators
    n = number of allegations examined by the investigators in total”

    I did not follow how you represented, “the proportion of reports examined by the researchers which they judged to be false even though they were true”, and thus how you determined a relationship between r and s. At any rate, I take the most contentious issue here to be how to estimate s. In your hypothetical you offered a “conservative” estimate that 10% of the reported rapes that proceeded to no further disciplinary action or prosecution might be false reports. I do not find that a conservative estimate and would like to know why such an assumption should be accepted in the absence of further evidence to indicate its likelihood. The argument seems to be that we can reject Lisak’s approach out of hand because it refuses to call a report false unless some evidence for this conclusion was available, and since this will only occur for some subset of all false reports, an unknown but greater number of the total reports must be false. Thus, the unaccounted-for “false but assumed true in the absence of other evidence” reports must be lurking within the other categories. However, since the number of reports coded as false is much smaller than the number that don’t proceed any further, if the same proportion is incorrect in each group then by definition the total number of false reports will have been “underestimated” on this analysis. Put another way, you’ve begun with assumptions that stack the deck in favor of a certain outcome, but would first need to explain why any significant proportion of cases that don’t proceed are likely to be false reports.

    Suppose that we flipped things around and applied this analysis while substituting for the ‘x’ of false reports those cases that were either found to have supporting evidence or resulted in sanctions against a perpetrator (these are equivalent standards, since the false reports here included both admissions and cases involving some supporting evidence of fabrication or falsity). We have many good reasons to suppose that additional reports are “true” despite not having been identified as such: the difficulties of obtaining physical evidence, the reluctance to level an accusation or offer testimony (because again, a report is a broader category here than an accusation), the absence of other witnesses, and so on.

    Finally, I wish to emphasize that this study concerned reports on a university campus. Can conclusions drawn from such research be extended to conclusions about rape reports in general? I’m not sure they can.

  25. Susan, thanks for your comment, you actually point out a really stupid mistake I made in what I said above! I interpreted (1 – r)x/(n – x) as the proportion of allegations examined by the investigators which they judged to be false even though they were true, but that’s not at all what it means. In fact, for the reason you point out (the number of allegations judged to be false is less than the number of allegations they did not judge to be false), it couldn’t possibly mean that. That’s because I read (1 – r)x/n instead of (1 – r)x/(n – x) when I wrote down the equation on a piece of paper.

    What I should have said, instead, is that x = rx + s(n – x) iff x(1 – r) = s(n – x), i. e. the investigators correctly estimate the number of false rape allegations iff the number of allegations they judge to be false even though they are true is equal to the number of allegations they do not judge to be false even though they are. This makes a lot more sense :-) Now my original point is that, when the investigators are researchers such as Lisak et al., the number of allegations they judge to be false even though they are true is likely to be less than the number of allegations they do not judge to be false even though they are.

    That’s because, as you pointed out yourself, the number of allegations they judge to be false is much smaller than the number of allegations they do not judge to be false. So, even if the proportion of allegations they judge to be false even though they are true is quite large and the proportion of allegations they do not judge to be false even though they are is very small, there can still be more allegations that are false but not judged to be false than allegations that are judged to be false even though they are true.

    In fact, it’s easy to see that x = rx + s(n – x) iff (1 – r)/s = (n – x)/x, i. e. the investigators get the number of false allegations right iff the ratio of the proportion of allegations they judge to be false even though they are true to the proportion of allegations they do not judge to be false even though they are is equal to the ratio of the number of allegations they do not judge to be false to the number of allegations they judge to be false.

    In the case of the study that Lisak et al. conducted, (n – x)/x = 16. So, for their estimate of the prevalence of false rape allegations in the sample they examined to be correct, it would have to be the case that (1 – r)/s = 16, i. e. the proportion of allegations they judged to be false even though they were true would have to be 16 times the proportion of allegations they did not judge to be false even though they were.

    Now, based on what Lisak et al. say about how they made the decision to classify a report as “false”, I think everyone will at least agree that, in the case of (1 – r), 50% is an upper bound. Based on that “worst case scenario”, it follows that, for Lisak et al. to get the number of false allegations right, s would have to be as small as 3.1%. And, if (1 – r) were in fact equal to 25%, which based on what Lisak et al. say about the reports they judged to be false doesn’t seem extravagant at all, s would have to be 1.5%.
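    To make the arithmetic concrete, here is a minimal Python sketch of the computation above (the 5.9% figure and the hypothetical values of 1 – r are the ones quoted in this thread):

```python
# Symbols as defined earlier in the thread:
#   x = reports judged false, n = total reports examined,
#   r = proportion of judged-false reports that are indeed false,
#   s = proportion of not-judged-false reports that are nevertheless false.
# The estimate is exact iff x = r*x + s*(n - x), i.e. iff (1 - r)/s = (n - x)/x.

f = 0.059                       # Lisak et al. classified 5.9% of reports as false
ratio = (1 - f) / f             # (n - x)/x, equivalently 1/f - 1
print(round(ratio))             # -> 16

# If half the judged-false reports were actually true (1 - r = 0.5),
# exactness would require s = 0.5/16, i.e. about 3.1%:
print(round(0.5 / ratio, 3))    # -> 0.031
# If 1 - r were 0.25 instead:
print(round(0.25 / ratio, 3))   # -> 0.016 (the thread rounds this to 1.5%)
```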

    Thus, you’re right to point out that, ultimately, the really contentious point is how small s actually is. And, of course, we don’t *know* that. But, quite frankly, if you were *forced* to bet a large amount of money on whether s is less than or equal to 3.1% or greater than 3.1%, after reading what Lisak et al. say about the reports they did not classify as “false”, I’m having a hard time believing that you would choose the former. As I admitted in response to magicalersatz above, this is speculation on my part, but honestly it doesn’t sound like a crazy sort of speculation to me.

  26. By the way, I should add that I agree with you on the last thing you said, we should keep in mind that Lisak et al. examined a sample that we have no reason to think is representative of the general population with respect to the prevalence of false rape allegations. But, since my point is that Lisak’s conclusions are not even warranted for the sample he and his collaborators examined, I obviously don’t think we should generalize them anyway.

  27. In comment #27, I meant to write “if you were *forced* to bet a large amount of money on whether s is less than or equal to 3.1% or greater than 3.1%”, not “if you were *forced* to bet a large amount of money on whether s = 3.1%”. (Sorry, as this conversation has amply demonstrated, I really should check my comments before I post them.)

  28. Okay, I think I understand what’s going on, I used the characters “” in that order and WordPress must have taken that to be a HTML tag, which is why it didn’t print it. In comment #27, you should read “if you were *forced* to bet a large amount of money on whether s is less than or equal to 3.1% or greater than 3.1%”, not “if you were *forced* to bet a large amount of money on whether s = 3.1%”.

  29. Wow, I’m really stupid, I did it again. I meant that I used the symbol “less than” and the symbol “greater than” in that order, so WordPress didn’t display them or anything that was in between them, thinking it was a HTML tag. Sorry for polluting the comments with that, which probably only makes sense for me.

  30. Philippe,

    Don’t be so hard on yourself. I’m amazed at how hard you’ve worked to explain the whole issue to the rest of us. Thank you for your efforts.

  31. Philippe, thank you for clarifying, that was helpful. I am still not sure why the proportions have to be related in this way; it seems like, for instance, if investigators label 10 out of 100 cases as false reports, but it turns out that one of those ten was a true report and one of the 90 deemed true was a false report, then they’ve still achieved the correct figure, however inadvertently.

    Where we disagree, however, is whether it should be assumed that additional reports not coded as false were actually false, and whether we can make reasonable estimates about false rape reports based on a study like Lisak et al. One of the most valuable things about the study is the way it explains the methods used to code cases, and argues for identifying reports as false only when there is a legitimate reason to suppose this is the case. Otherwise we will end up with cases termed “false” when a rape actually did occur.

    The assumption of the analysis above seems to be that the truth of how many reports were false is probably different by some unknown amount from the study’s finding – I don’t know that a precise meaning has been given to this category yet, since it will be a different meaning from the one assigned by Lisak and other such studies, but presumably it includes things like “reporting a rape when no such thing occurred, either because the person making the report believed the encounter was consensual, or there was never any encounter”. But isn’t this precisely what the investigations and the study itself were supposed to determine the likelihood of? When evidence indicated that no encounter occurred or the person reporting believed the encounter was consensual, then it was called a “false” report. In the absence of such evidence, why would we assume that a certain percentage of “nevertheless false” reports must have been misinterpreted? Perhaps in any human endeavor we must account for some degree of error, but where and why are we supposed to infer error when it contradicts the actual evidence about the cases examined? It should be sufficient to acknowledge that a particular outcome could have erred in either direction.

    Lisak’s outcome falls roughly in the middle of outcomes of other studies that employed comparable methodologies, yet I wouldn’t say we have so much evidence that confident assertions may be made about a particular percentage. Presumably that’s why even the researchers suggest the percentage could be a few points higher or lower. Why shouldn’t we conclude, then, based on studies that used the best methodology to date, that the rate of false reports is probably somewhere between 2 and 10%?

  32. “Philippe, thank you for clarifying, that was helpful. I am still not sure why the proportions have to be related in this way; it seems like, for instance, if investigators label 10 out of 100 cases as false reports, but it turns out that one of those ten was a true report and one of the 90 deemed true was a false report, then they’ve still achieved the correct figure, however inadvertently.”

    Actually, I don’t think this example shows that my conclusion that x = rx + s(n – x) iff (1 – r)/s = (n – x)/x is false, on the contrary. In the case you’re describing, r = 9/10, s = 1/90, x = 10 and n = 100, so x = rx + s(n – x) and, therefore, the investigators get the prevalence of false rape allegations in their sample right. But, in that case, (1 – r)/s = (n – x)/x = 9, exactly as predicted. I don’t see how it could be otherwise, since x = rx + s(n – x) iff (1 – r)/s = (n – x)/x holds in virtue of arithmetic alone. But maybe I didn’t understand you correctly, please let me know if that didn’t help.
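    The 10-out-of-100 example can be checked exactly with Python’s `fractions` module (a quick sketch; the numbers come straight from the comment quoted above):

```python
from fractions import Fraction as F

# The example: n = 100 reports, x = 10 judged false.
# One of the 10 judged-false reports was actually true  -> r = 9/10.
# One of the 90 not judged false was actually false     -> s = 1/90.
n, x = 100, 10
r, s = F(9, 10), F(1, 90)

# Actual number of false reports: 9 among the judged-false, 1 among the rest.
actual_false = r * x + s * (n - x)
assert actual_false == x          # the estimate is exactly right (10 = 10)

# ...and, as the identity predicts, both ratios equal 9:
assert (1 - r) / s == F(n - x, x) == 9
```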

    As for your other point, I think you’re right that it’s important to define the notion of a false rape accusation, but Lisak et al. don’t do that. They *say* they do, but they don’t. What they do is explain the criteria they used to classify a report as “false”, but that’s not a definition of false report, at best it’s an *operational* definition. From what they say in the paper, one can infer the definition of false report they are assuming (which I think must be something like “a rape did not occur and the person who made the report knew it”), but they never explicitly state it.

    Only the crudest sort of operationalist would insist that a false report just *is* what meets the criteria Lisak et al. used to classify reports as “false”. The point is not really that Lisak et al., since they are only human, are fallible and may classify a report as “false” even though it doesn’t meet their criteria for doing so, though of course this can also happen. It’s that, regardless of that problem, a report that meets their criteria for being classified as “false” may well be in fact true and, conversely, a report which does not meet their criteria may nevertheless be false.

    I have no doubt that even Lisak et al., if pressed, would not deny that, because they could only do so by endorsing full-blown operationalism, which is obviously false. Surely, some rape allegations are false even if no evidence shows it, just as many are true even though there is no evidence that a rape occurred. Now, as I explained above, in order for their conclusions to be correct about the sample they examined, the relation (1 – r)/s = (n – x)/x *must* obtain. But, as I argued using the data of Lisak et al., 2010, it seems quite implausible that this condition obtains and, in fact, much more plausible that it fails to obtain in a way that led them to underestimate the prevalence of false rape allegations in the sample they examined.

  33. Philippe, my point is that your equation merely tells us how to determine, for any given number that turns out to be the number of false reports identified by the study, whether it was overestimated or underestimated relative to some “actual” number of false reports. This was never at issue, since the question is why the “actual” number would differ in one direction or another beyond the usual margin of error in such a study. In other words, relating the proportions in this way puts us no closer to determining whether the false report number is more likely to be an underestimation. I assume you have reasons for thinking underestimation is likelier, unrelated to the numbers of cases in one category or the other, and these are the reasons I’d like to discover and consider.

    I agree with you that the Lisak study doesn’t define false report in the way mentioned above, but it doesn’t need to for the purpose of classifying cases given the available evidence. Obviously the kind of result they obtain this way is different from the result we could get if we had access to the complete set of facts about what happened in all cases, but none of us will ever have those facts, so 2 – 10% is the closest we can get to a reasonable and evidence-informed estimate. That’s not operationalism unless most of what we do in the legal system involves operationalism of a similar kind.

    I take your point to be that any gap between the available evidence and reality, regardless of its size, is more likely to cause underestimation of false reports than the opposite. However, if we consider how the categories are determined I don’t reach the same conclusion and would expect underestimation to remain within the margin of error. For example, I could decide to tell the police I was raped while refusing to give my name or even the rapist’s name, even as I am fully aware that nothing happened. I bet this sort of thing already has occurred in the course of human events, but I bet it has happened less often than a true rape victim changing the story after becoming unable to withstand the pressure to recant. Likewise for persons who take the trouble to fabricate a charge of rape only to find to their surprise and dismay that the accusation doesn’t meet required elements of the crime, though I admit some liars aren’t very clever. Some victims who are faking it cannot identify their perpetrators because they were never raped, while some abandon the process or follow it through until the lie proves successful. Meanwhile, some who aren’t faking it will recant when they need not have.

    I don’t know what evidence could cause us to lean in the direction of one or the other of these things being more likely, since the only evidence we could use to adduce it is precisely the sort of thing examined in the study, perhaps combined with other claims we accept. I probably find it more likely than some others do that a person making a true claim would decide to say it was made up, since I’ve spent some time working with victims, observing how they are questioned by police, doctors, and others, and how they react. The tendency toward self-blame and giving up to make it go away can be very powerful.

    I conclude that we should remain open to the possibility that some rape reports, like any other crime reports, are false, even as we act on the reasonable assumption that most of them are true. I don’t see any contradiction here. The estimate of 2-10% seems as close to well-founded as we have right now, though I would like to see further research done. Personally, I’m outraged by suggestions I’ve seen in the post-UVA media that “victim’s advocates” think we should believe the truth of every rape report, full stop. I’ve worked as a victim’s advocate and I don’t think that. False reports undermine victims, worsen their suffering, and make it even harder for them to seek justice. It is entirely possible to listen to and affirm a person who is telling a story without rendering any sort of judgment on the facts of the story – that’s what counselors do every day. As the listener who has little responsibility to judge, my beliefs aren’t the issue; obviously this is different for professionals whose responsibility involves judging facts, like doctors or prosecutors or journalists.

  34. Sorry for the late reply, I’ve been very busy. First, as I already said above, I completely agree that people in charge of investigating rape allegations should always take them seriously. In fact, even if a very large proportion of rape allegations were false, which I don’t believe for a second, I would still think exactly the same thing. So let’s get that out of the way and focus on what we seem to disagree about.

    As I explained above, in a study like Lisak et al., 2010, the investigators get the prevalence of false rape allegations in the sample they examined right iff (1 – r)/s = (n – x)/x. In turn, this can easily be seen to be equivalent to (1 – r)/s = 1/f – 1, where f is the proportion of reports the investigators classified as “false”. I think this tells us a great deal more than you seem to realize. It tells us, given the proportion of reports the investigators classified as “false”, how much greater the proportion of reports they classified as “false” even though they were true must be than the proportion of reports they did not classify as “false” even though they were.
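    The equivalence between (1 – r)/s = (n – x)/x and (1 – r)/s = 1/f – 1, with f = x/n, is pure arithmetic, and can be spot-checked exactly with Python’s `fractions` module (a small sketch):

```python
from fractions import Fraction as F
import random

# For any n and any x between 1 and n - 1, with f = x/n,
# the two right-hand sides (n - x)/x and 1/f - 1 coincide.
rng = random.Random(0)
for _ in range(1000):
    n = rng.randint(2, 10_000)
    x = rng.randint(1, n - 1)
    f = F(x, n)
    assert F(n - x, x) == 1 / f - 1
print("identity holds on 1000 random cases")
```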

    Now, what I’m saying is that, together with what we know about the criteria used by the investigators to classify reports as “false” and information we have about the reports in the sample they examined, this can sometimes tell us something about whether they are likely to have underestimated or overestimated the prevalence of false rape allegations in the sample they examined. But let me explain why I think so and illustrate with some examples.

    The proportions r and s are determined conjointly by the criteria used to classify reports as “false” and the actual prevalence rate of false rape allegations. Of course, the latter is precisely what we’re trying to estimate, so we don’t know it, but unless we know the former we have no reason to take a study seriously at all. Thus, based on the proportion of reports the investigators classified as “false”, we sometimes have good reasons to believe they have either underestimated or overestimated, as the case may be, the prevalence of false rape allegations.

    That’s because sometimes f implies a value of (1 – r)/s that is just not plausible, given what we know about the criteria that were used by the investigators to classify reports as “false” and the information we have about the reports they examined. When we are in a position to reach that conclusion will of course depend on what proportion of reports the investigators classified as “false” and what information we have about the criteria they used to do so and the sample they examined.

    In the case of Lisak et al., 2010, we know that, to the extent that Lisak et al. did not make a mistake in applying their criteria for classifying a report as “false”, the reports they classified as “false” have a more than 50% chance of being false. As I explained above, this means that, since they classified 5.9% of the reports they examined as “false”, for this not to underestimate the actual prevalence of false rape allegations, s would have to be less than 3.1%. And that’s on the assumption that 50% of the reports they classified as “false” were in fact true, which strikes me as implausible given what they say about the reports they put in that category.

    But, given how difficult the criteria they used made it for a report to be classified as “false” and what they say about the reports that were not so classified, my claim is that it seems extremely likely that, whatever s is exactly, it’s greater than 3.1%. If you have read Lisak et al., 2010 and disagree with that, I’m not sure there is much I can say to convince you. But, before you say that it doesn’t strike you as very implausible, I would like to make sure that you fully appreciate what it would take for this to be the case. It’s not necessary, for example, that most of the reports that were eventually withdrawn were in fact true, as you seem to suggest in your last comment. Clearly, even if a very large majority of the reports that were withdrawn were nevertheless true, s could still easily be greater than 3.1%.

    In fact, it’s incredibly easy for s to be greater than 3.1%, especially since reports that were not classified as “false” not only include those that did not proceed for one reason or another, but also those which proceeded though we have no idea how good the evidence was that a rape did occur and all those which the investigators classified as “insufficient information”. It really just takes a few reports to be false even though they were not classified as “false” in order for s to be greater than 3.1%. Of course, I can’t prove to you that s must have been greater than 3.1%, so if you still don’t find that implausible I guess we can only agree to disagree, but I just don’t think many people would disagree with me after reading Lisak et al., 2010.

    Similarly, given what Saunders explains about what law enforcement and criminal justice system professionals mean when they say that a report is false, I think it would be unreasonable to accept their high estimate of the prevalence of false rape allegations. Suppose they said that, according to them, 60% of the reports they had to investigate were false. It means that (1 – r)/s would have to be as small as 4/6, i. e. s would have to be greater than (1 – r) by a factor of 6/4. But, given how easy it seems to be for a report to be classified as “false” by law enforcement and criminal justice professionals (according to Saunders, it looks as though there just needs to be some good evidence that the complainant lied, but as was noted above there are lots of reasons why a complainant might lie even though she really was raped), that would be really surprising. In fact, in that case, the mere hypothesis that s is greater than (1 – r), let alone by a factor of 6/4, seems rather implausible.
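    The 60% hypothetical works out as follows (a quick check in Python; the 60% figure is a hypothetical from this thread, not a real estimate):

```python
from fractions import Fraction as F

# If 60% of reports were classified as false, f = 3/5, and for that
# estimate to be exact we'd need (1 - r)/s = 1/f - 1 = 2/3 (= 4/6),
# i.e. s greater than (1 - r) by a factor of 3/2 (= 6/4).
f = F(60, 100)
required_ratio = 1 / f - 1
assert required_ratio == F(2, 3)
print(required_ratio)   # -> 2/3
```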

  35. I am sure there is something simple I’m missing here; I am coming at this issue from the perspective of someone who has worked with rape survivors and I’ve never studied it scientifically beyond reading the popular studies that have been done. You have to correct me if I’m interpreting this incorrectly. This is the first place where I lose the thread of your explanation. The point of your equation seems to be that if a study arrives at some number x and the “real” number represented by x is something different, it’s more likely that the “real” number falls on one side of x or the other depending on what proportion of n is represented by x. This is true for any study but in a way that doesn’t affect anything. In other words, it doesn’t tell us why we should assume that x isn’t the right number, or that error in the actual measurement of the content was more likely to occur on one side of x or the other. I’m missing why this is supposed to be a meaningful observation.

    I think Lisak et al. propose a more accurate means of determining the number of false reports, given that other methods tended too readily to dismiss reports as false. The study was well done in my opinion but it’s just one study at one school, and I’d like to see a lot more such studies before drawing firm conclusions about false report rates. You seem to think it’s obvious that some of the other categories classified as not-false must conceal cases where nobody was actually raped. I take it that, since the false report rate is underestimated if we find only a few of those cases, you conclude the rate was likely underestimated. This is the second place I’m losing the thread. Why should we assume there are enough of these cases that the false report rate was underestimated beyond the normal margin of error (Lisak says it could be as high as 10%)? I agree with you that there’s no way to “prove” it beyond a doubt, but are there at least some reasons to think it? Because what Lisak suggests about false reports is roughly in line with false reports of some other crimes, and I guess I’m not seeing why people would lie more often about rape than about other kinds of crimes. Or maybe you think that false reports are probably higher in all these cases? I surely can imagine someone reporting a rape that never really happened, then having second thoughts after confronting the gravity of the situation and refusing to “cooperate” further with the investigation. I mentioned some other weird cases of fabrication above that I can imagine too. I just can’t imagine them happening even close to as often as people report rapes that actually happened, and most of those reports don’t result in any further action taken.

    Anyway, I feel like I’m repeating the same things I said above – if I’ve missed something crucial then feel free to point it out to me, but otherwise it may simply be that we have rather different ideas about what the facts about these cases are likely to be.

  36. This is the first place where I lose the thread of your explanation. The point of your equation seems to be that if a study arrives at some number x and the “real” number represented by x is something different, it’s more likely that the “real” number falls on one side of x or the other depending on what proportion of n is represented by x.

    You make it sound like what I’m saying is that, if the proportion of reports classified as “false” by the investigators is too low, then the prevalence of false rape allegations is likely to have been underestimated, while if that proportion is too high the actual prevalence of false rape allegations is likely to have been overestimated. But that is not at all what I’m saying, because saying that would presuppose that we already have a pretty good idea of what the prevalence of false rape allegations is, when that’s precisely what we’re trying to find out.

    As I explain above, the inference I’m making is not from the proportion of reports classified as “false” alone to the conclusion that the prevalence of false rape allegations has been underestimated or overestimated, as the case may be. The inference is from the proportion of reports classified as “false” plus the criteria used to classify reports as “false” and the information we have about the reports that were examined by the investigators.

    This inference is made possible by the fact that the proportion of reports classified as “false” implies that a certain relation between (1 – r) and s must obtain in order for the study to correctly estimate the prevalence of false rape allegations, and what relation actually obtains between (1 – r) and s depends not only on the prevalence of false rape allegations, which we don’t know, but also on the criteria used to classify reports as “false”, which we do know.
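    That relation can be checked numerically. The sketch below assumes the notation from earlier in the thread: out of n reports, x are classified “false”; r is the probability that a report classified “false” is actually false, and s the probability that a report not so classified is actually false. The sample numbers are purely illustrative, not taken from any study:

    ```python
    # Numeric check of the relation between x/n, r, and s described above.
    # The sample figures here are illustrative only.

    def actual_prevalence(n, x, r, s):
        """True prevalence of false reports implied by given r and s."""
        return (r * x + s * (n - x)) / n

    n, x = 100, 6      # 6% of reports classified as "false" (illustrative)
    r = 0.5            # suppose half the "false" classifications are correct

    # The estimate x/n is exact precisely when (1 - r)/s == (n - x)/x,
    # i.e. when s == (1 - r) * x / (n - x).
    s = (1 - r) * x / (n - x)
    print(abs(actual_prevalence(n, x, r, s) - x / n) < 1e-12)  # True
    ```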

    The information we have about the criteria used to classify reports as “false” and the reports examined by the investigators, I claim, is sometimes enough to reject as implausible certain values of (1 – r)/s, hence certain values of x/n in virtue of the relation I pointed out above. That does not mean that we can reject those values of x/n a priori, because if the criteria used to classify reports as “false” were different and the investigators still found a value of x/n in that range, we might have no reason to reject it as implausible.

    I think Lisak et al. propose a more accurate means of determining the number of false reports, given that other methods tended too readily to dismiss reports as false. … You seem to think it’s obvious that some of the other categories classified as not-false must conceal cases where nobody was actually raped. I take it that, since the false report rate is underestimated if we find only a few of those cases, you conclude the rate was likely underestimated. This is the second place I’m losing the thread. Why should we assume there are enough of these cases that the false report rate was underestimated beyond the normal margin of error (Lisak says it could be as high as 10%)?

    As I just explained, my inference is not from the fact that x/n is low to the conclusion that the study underestimates the prevalence of false rape allegations. My inference is from that fact plus what we know about the criteria used to classify reports as “false” and the reports examined by the investigators. The very fact you cite about the criteria used by Lisak et al. to classify reports as “false”, although it means, as you point out, that true reports were not wrongly classified as “false”, as many would have been had other criteria been used, also means that, if there are enough false reports in the sample, then many of those will not be classified as “false”.

    Now, given what Lisak et al. tell us about the reports they did not classify as “false”, as well as the criteria they used to classify a report as “false”, and how large (1 – r)/s would have to be in order for their estimate of the prevalence of false rape allegations not to underestimate the actual prevalence, my point is that it most likely underestimates it.

    If you still don’t follow the structure of my argument, I guess you should just answer this question: do you really think, given what Lisak et al. say in their paper about the criteria they used to classify reports as “false” and the reports they did not classify as “false”, that s can plausibly be less than or equal to about 3%? (And that’s what is required on the, in my opinion, quite implausible assumption that 50% of the reports they did classify as “false” were in fact true; if only 25% of them were in fact true, s would have to be less than 1.5%.) If you do, then presumably my argument is not going to convince you, but I can’t bring myself to think that such a judgment is reasonable. (I give some of my reasons for thinking that in my previous comment.)
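    For what it’s worth, the arithmetic behind those thresholds can be reproduced under one assumption that is not quoted in this thread: that Lisak et al. classified roughly 5.9% of the reports in their sample as “false”. On that assumption:

    ```python
    # Rough check of the ~3% and 1.5% thresholds mentioned above. Assumes
    # (not quoted in this thread) that Lisak et al. classified about 5.9%
    # of reports as "false". Here t is the fraction of those "false"
    # classifications that were mistaken (i.e. the report was in fact
    # true); the function returns the largest s compatible with the
    # estimate not underestimating the true prevalence.

    def s_threshold(f, t):
        """Break-even s for a classified-"false" rate f and error fraction t."""
        return t * f / (1 - f)

    print(round(s_threshold(0.059, 0.50), 3))  # 0.031, i.e. about 3%
    print(round(s_threshold(0.059, 0.25), 3))  # 0.016, i.e. about 1.5%
    ```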

    Another thing I would like to note is that my argument makes for a different, stronger conclusion than the rather trivial claim that Lisak’s estimate of the prevalence of false rape allegations is subject to a margin of error. When an estimate is subject to a margin of error, say 10%, it means that the actual value of the parameter that was estimated could be greater by as much as 10% than its estimated value, but also that it could be less by as much as 10%. On the other hand, the conclusion of my argument is that the proportion of reports classified as “false” by Lisak et al. is likely to be less than and unlikely to be greater than the actual prevalence of false rape allegations in the sample they examined. So, even if you don’t find my argument convincing, you must recognize that it purports to make a different kind of claim than the one you apparently took it to be making.

    Based on what Lisak et al. say about the other studies they deem kosher (more specifically, about the criteria they used to classify reports as “false”), each of which classified as “false” between 2% and 10% of the reports they examined, I also have every reason to think that the same argument would show that their conclusion about the prevalence of false rape allegations most likely underestimates it. By how much I really have no idea. Which means that, as I already noted in one of my comments above, the actual prevalence of false rape allegations could still be, for all we know, somewhere between 2% and 10%. But we are only justified in thinking that it’s somewhere between (2+x)% and (10+x)%, and we don’t know what x is.

    Note that, as I already explained, the exact same reasoning tells us that, when the investigators are law enforcement and criminal justice professionals, assuming that what Saunders explains in her paper about what they tend to mean when they call a report “false” is right, their estimate of the prevalence of false rape allegations is very likely to overestimate the actual prevalence. Finally, I made no claim about how the prevalence of false rape allegations compares to the prevalence of false allegations for other types of crime, because I really have no idea. I’m sure that my argument could also apply to studies which purport to estimate the prevalence of false allegations for at least some, and probably most, other types of crime, so it’s not as if I thought rape was somehow special in that respect.

  37. I thought Lisak’s study used the appropriate criteria to classify a rape report as false, while other studies have not always done so. This approach seems about as close as we can come to identifying an accurate false report number based on the best available evidence and standards, which is why I don’t find it as likely that they’ve missed many “actually” false reports in their result. I understand why you have a different view, and that’s fine, but I don’t think there’s any need for you to imply my judgment about this isn’t reasonable. The likelihood of false reports being hidden among the reports coded as true for other reasons (or vice versa) is one of those things about which people can reasonably disagree. I wonder if we see it differently because I’m viewing these as reports as opposed to cases? Just because a report is rightfully counted as true doesn’t mean it should lead to a criminal case or any subsequent punishment, for example.

    I took one of your points to be that we can’t rely on any of these study results as an accurate indicator of the likelihood of false reports, as they’re all likely to underestimate that percentage. What better rule of thumb do we have, though, than one based on the available evidence and presented with caveats about the potential for error? What these results tell me is that 1 in 20 reports of rape is likely to be false. Maybe the number should be 1 in 18, or 1 in 15. I don’t know for sure. However, we can assume with reasonable confidence, on the basis of such studies, that the percentage of false rape reports is low, and comparable to other crimes, but still high enough that great caution is required because a significant number of people are going to be falsely accused. Should adopting a more skeptical approach to studies like Lisak’s cause me to draw a different conclusion about how to proceed? We also know that rape remains at a very low reporting percentage, unlike other crimes of similar gravity. We want to encourage people to report while also recognizing that the number of false reports is high enough to be a significant problem – I certainly would be devastated if I were falsely accused of such a thing. Creating policies that serve both ends is tricky but extremely important for justice – perhaps that is something on which we will agree.

  38. I should add, to clarify what I just posted: I know that a false report is quite different from a false accusation, so I don’t mean to imply that the numbers of these will be equivalent. Regardless, I am still worried about the possibility of false accusations, as I would be with any other crime, but I’m far more worried about the fact that most of these true reports will leave a real rapist operating freely with impunity, which matters not only for the sake of justice but because we know many of them are repeat offenders.
