There’s a nice summary here of several studies of anonymization in peer review. The summary itself comes from a study showing that 74-90% of reviews (in which reviewers were invited to guess author identities) contained no correct guesses of author identity.
reviewers with author information were 1.76x more likely to recommend acceptance of papers from famous authors, and 1.67x more likely to recommend acceptance of papers from top institutions…when reviewers knew author identities, review scores for papers with male-first authors were 19% higher, and for papers with female-first authors 4% lower.
I think this has some potential to be misleading. Firstly, the linked study is about the effectiveness of the double-blind process at actually hiding the identity of authors from referees; it doesn’t investigate how effective anonymizing is at removing bias. The quoted section is a précis of another paper, looking at submissions in language evolution. That paper is at https://academic.oup.com/jole/article/1/2/163/2281905, so presumably the link should go there – though it’s from 2 years ago, so not really a “new study”, and it’s in one specific field (not philosophy) and concerns conference submissions, not journal articles.
Secondly, studies in this field have found conflicting results – as the language-evolution paper makes clear in its own introduction. I think the most one can conclude about whether anonymizing reduces bias, on the basis of the data so far, is “maybe” and “it depends on the field and the specific context” – though, given the obvious theoretical case that anonymizing ought to reduce any bias that’s present, it still looks like a good idea in those contexts where there’s no significant downside.
You’re right. My post was hastily and misleadingly written. Will fix.
Actually, I misread it too (demonstrating how easy it is to do this when you’re writing in a hurry between other tasks!). Here’s the fuller quote from the article:
“In a controlled experiment with two disjoint program committees, the ACM International Conference on Web Search and Data Mining (WSDM’17) found that reviewers with author information were 1.76x more likely to recommend acceptance of papers from famous authors, and 1.67x more likely to recommend acceptance of papers from top institutions. A study of three years of the Evolution of Languages conference (2012, 2014, and 2016) found that, when reviewers knew author identities, review scores for papers with male-first authors were 19% higher, and for papers with female-first authors 4% lower. In a medical discipline, U.S. reviewers were more likely to recommend acceptance of papers from U.S.-based institutions.”
So it’s actually summarizing specific results from three pieces of research: one controlled study at a specific computer-science conference; one series of datasets from a specific evolution-of-languages conference; and one more general bit of research on medical-science journal submissions.
The section you quote is a summary of the results of the computer-science controlled study. I misread the paper and confused it with the second study – sorry! I don’t think it’s correct to say that it’s a summary “of several studies”, though – I think it’s just that one study.