A new study published in Proceedings of the National Academy of Sciences reports that men and women are not equally receptive to experimental evidence of gender bias in STEM settings. Ian Handley and colleagues reported the results of three experiments. In the first and second experiments, men and women read an actual article abstract from a peer-reviewed scientific journal, accompanied by the article’s publication information and the first author’s full name. In the first experiment, participants were Mechanical Turk workers; in the second, they were male and female STEM and non-STEM faculty. The abstract used in experiments 1 and 2 was from Corinne Moss-Racusin and colleagues’ (2012) PNAS article reporting gender bias in science faculty’s hiring decisions. In the first experiment of the new Handley study, men were significantly more likely than women to evaluate the abstract negatively. In the second experiment, male faculty in STEM departments displayed the same pattern: they evaluated the Moss-Racusin et al. (2012) abstract more negatively than female faculty in STEM departments did. Amongst non-STEM faculty, men and women gave comparable evaluations. Finally, in the third experiment, Handley and colleagues replicated the main effect using a different abstract (from Knobloch-Westerwick et al. (2013)), which reported gender bias in reviews of scientific conference submissions. However, when the authors altered the abstract to report no gender bias, they found that women evaluated it more negatively than men.
This study has some obvious implications. The authors focus on the worry that no amount of evidence attesting to pervasive gender biases will be sufficient to convince skeptics, if gender biases are affecting skeptics’ assessments of that evidence.* They also discuss potential mechanisms driving these effects, in particular the idea that male faculty in STEM departments might find evidence of gender bias (perhaps implicitly) threatening (in accord with “Social Identity Theory”). More research on this is clearly needed.
What I want to consider briefly is the notion of “bias” at work in this study, and in coverage of it. David Miller, for example, describes the third experiment as showing that “women have their own biases” (here). Commenters have made similar points on Facebook. This is understandable, and is certainly true as a general point, since all human beings have biases, and women are human beings! Handley and colleagues saw a clear reversal in evaluations: when the abstracts reported gender bias, men were harsher, and when they reported no gender bias, women were harsher. The authors themselves point out that “individuals [not just men] are likely to demonstrate a gender bias toward research pertaining to the mere topic of gender bias in STEM” (3). One reason they draw this conclusion is that the biases they detected were only relative to each other: there was no condition controlling for the effect of gender on participants’ evaluations.
However, it seems right to conclude that both men and women are biased in these particular findings only if there is no means to independently assess the quality of the evidence in the abstracts.** If it is true, though, that gender bias is pervasive in the domains described in the study materials, then women who give positive evaluations of studies finding gender bias, and negative ratings to studies not finding gender bias, are accurate, not biased.***
Similarly, if we presume that women (especially female STEM faculty) are more informed about research on gender bias than men, then we might give their abstract evaluations more credence.**** I’m grateful to Alex Madva for this point; he suggests an analogy: if a group of climate scientists negatively evaluated abstracts denying the existence of climate change, and a group of people who are not climate scientists rated the same abstracts positively, would we conclude that “everyone has their biases”?
Thanks to Alex Madva, Daniel Kelly, and Jennifer Saul for helpful suggestions on this post.
*Jennifer Saul has discussed similar concerns about the effects of implicit biases here.
** How might researchers at least approximate an assessment of the abstracts independent of rater gender? Perhaps a team of independent mixed-gender reviewers? Or an average of all reviews, against which the ratings of men and women could be compared separately? Or simply compare the evaluations of abstracts by gender against the results of a meta-analysis of similar studies? (A rough sketch of the latter two ideas follows below.)
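To make those last two suggestions concrete, here is a minimal sketch in Python of how one might test whether each gender’s mean ratings deviate from an independent benchmark (whether a mixed-gender panel’s average or a meta-analytic estimate). Every number below is invented purely for illustration; nothing here comes from the Handley et al. data.

    # Toy illustration: test whether each gender's mean abstract rating
    # deviates from an independent benchmark rating (e.g., from a
    # mixed-gender review panel or a meta-analysis). All values invented.
    from statistics import mean
    from scipy import stats

    benchmark = 7.2  # hypothetical independent quality rating (1-10 scale)
    ratings_by_gender = {
        "men":   [5.8, 6.1, 6.5, 5.9, 6.3, 6.0, 5.7, 6.4],
        "women": [7.0, 7.4, 6.9, 7.3, 7.1, 7.5, 6.8, 7.2],
    }

    for group, ratings in ratings_by_gender.items():
        # One-sample t-test of this group's ratings against the benchmark
        t, p = stats.ttest_1samp(ratings, benchmark)
        print(f"{group}: mean = {mean(ratings):.2f}, "
              f"deviation from benchmark = {mean(ratings) - benchmark:+.2f} "
              f"(t = {t:.2f}, p = {p:.3f})")

On this approach, a group whose ratings deviate significantly from the benchmark would count as biased relative to it, while a group whose ratings track the benchmark would count as accurate rather than biased; in the made-up numbers above, the women’s ratings match the benchmark and the men’s fall below it.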
***Of course, gender bias could be truly pervasive in these domains, and it could still be the case that any one study purporting to demonstrate gender bias is low quality. Note, though, that study participants were only asked to rate their agreement with the authors’ interpretation of the results in the abstract, the importance of the research, how well-written the abstract was, and its overall quality. If one believes that gender bias is pervasive, and reads an abstract reporting gender bias, one is likely to give positive answers to these questions. (Moreover, participants’ answers to these four questions were highly correlated, suggesting that they were answering based on an overall sense of the accuracy of the study’s findings.) Perhaps this is a limitation of the Handley et al. study. It would be interesting to find out whether asking other questions, such as “how rigorous do you think the study’s methodology is?” or “how much does this data contribute to the overall case for finding gender bias (or its absence) in STEM fields?”, would affect the results.
****The authors did examine whether the amount of experience a person has had with gender discrimination correlated with their evaluations of the abstracts. (These data are found in the supplementary materials.) For women, they found no correlation. Interestingly, for men, they did find a correlation. The more (“reverse”) gender bias men reported having personally experienced, the more harshly they rated the abstracts.
You’ve gone to some pretty great lengths in this post. Some observations:
1. You write “if we presume that women (especially female STEM faculty) are more informed about research on gender bias than men, then we might give their abstract evaluations more credence.” In addition to STEM faculty, the result comes out in the lay population, where we have no reason to presume greater scholarly research competence. So it is quite unlikely that this is what is happening. Moreover, there are scholarly reasons to debate Moss-Racusin et al. that come with greater knowledge of their research.
2. You write “If it is true, though, that gender bias is pervasive in the domains described in the study materials, then women who give positive evaluations of studies finding gender bias, and negative ratings to studies not finding gender bias, are accurate, not biased”. Of course, X could be common in a domain, but that is separate from evaluating individual and specific pieces of research on X. It could just as easily be a demonstration of “bias” as of “accuracy” to think that, because there is a general problem of X, or perhaps because of personal experiences with X, a particular study claiming something about X is therefore more accurate than not. That sounds to me like how biases typically work.
3. One suggestion with respect to baseline epistemic assessments is just to use actual peer-reviewed research failing to find bias in many other domains, or research where the original papers failed to replicate or have been retracted due to errors. However, as David Miller points out, these things are conspicuously missing from the original article…
4. The critical interaction showing that men are more skeptical of findings of bias against women in particular (as opposed to, say, more skeptical of social science research in general, or something else entirely that we cannot know without controls) came out at “P = 0.046”. In other words, it could be that men happened to be more skeptical of the findings in the three studies for reasons that had nothing to do with gender content.
Anon, thanks for your comments. Some brief replies:
1. I don’t know whether female STEM faculty are more informed about research on gender bias compared to male STEM faculty. My point was only that IF they are, then we might give their evaluations more credence. The same point holds for the lay population. If there are reasons to think that women in general are more informed about the relevant research than men, then we might give their evaluations more credence too. My intuition is that women (both STEM faculty and in the general population) might indeed be more informed about the research, but I don’t know.
2. Yes, I agree. I tried to make this same point in the third (***) footnote. I hope that future research can more clearly disentangle participant evaluations of research on gender bias in general from their evaluations of one particular study.
3. I agree that the lit review in the Handley et al. paper didn’t discuss relevant studies showing apparent absences of bias. Doing so might have helped frame their research. I’m not sure, though, why using only studies failing to find bias would be a good way of establishing a baseline assessment (rather than using a review of all of the relevant studies). Maybe I’m misunderstanding your point, though.
4. Maybe so, and hopefully future research can shed light on this. What you suggest would be a little surprising to me if it turned out to be true, since it strikes me that there are reasons to think that men might be more skeptical than women about gender bias, whereas I’m not sure what the reasons would be to think that men are more skeptical than women about social science in general.
Thanks for the replies, Michael. Some further thoughts:
1. So is there any evidence that this is true? It is hard to believe, without evidence, that lay populations have the research competence you propose, and the situation certainly isn’t analogous to the climate scientists example proposed by Alex Madva. If the factor doesn’t explain untrained lay responses, it probably isn’t explaining the same result among STEM experts either.
2. You agree, then, that your third footnote undermines what I took to be the main point of your post, namely that questioning a null result isn’t biased because finding bias is the right answer?
3. The idea is to compare two results that are just as likely to be true. This paper compared reactions to peer-reviewed abstracts finding bias with reactions to imaginary vignettes finding no bias. One minimal step would be to use peer-reviewed research as the basis for both abstracts.
4. They’re both pretty much conjecture; that’s the point of controls. The only control was pretty weak.
Hi again Anon.
1. I just don’t know what the evidence shows, but I don’t find it hard to believe that women know more about gender bias (and the evidence attesting to its existence) than men do, on average.
2. No, I don’t agree that my footnote undermines the point of my post. It is not clear (to me) how the participants understood the questions they were being asked. So while of course gender bias could be pervasive and it still be true that any one study purporting to show gender bias is low quality, the participants might have taken themselves to be answering a question about the evidence of gender bias in general. This is why I made some tentative suggestions for questions researchers might consider asking in the future.
3. The paper did not compare reactions to peer-reviewed abstracts finding bias to imaginary vignettes. In Study 3, it compared reactions to peer-reviewed abstracts finding bias to reactions to exactly the same abstract with a tiny wording change, indicating no finding of bias. (You can see the exact wording in the supplementary materials.)
4. Given the tremendous and pervasive history of sexism and gender discrimination, I have more confidence in the conjecture that men are, on average, more skeptical about gender bias than women are, whereas I have no reason to believe that men are, on average, more skeptical about social science than women are. Analogously, I’d venture that white people are more skeptical about the perpetuation of white-on-black racism than black people, on average, but I’d have no reason to think that white people are more skeptical about social science than black people.