Another piece of the bias puzzle

For better or for worse, studying and working in philosophy does seem to encourage introspection. Our discourse – at least for many of us – is full of references to our intuitions. And yet that leaves us with whole groups of philosophers who say things such as “I only care about quality, not race or gender,” and whose behavior puts such statements in question. E.g., they present themselves as simply interested in the quality of the work graduate students produce, but at the same time they don’t give the same time and attention to the work done by women students. Or they wouldn’t begin to consider hiring an African American, because they don’t want to participate in a practice that supposedly grossly inflates black salaries, etc. As if they knew much about black salaries.

How in the world does one of the most introspective of disciplines end up with such self-deceivers?

There’s a somewhat new answer recorded in a New Yorker blog. Maybe not the most academic of sources, but the research it reports is. Here’s the answer: bias is not revealed by introspection. People whose knowledge of themselves is due more to introspection than to reflection on their behavior are more apt to be ignorant of their biases.

Well, duh!

So for starters we need to make biased behavior in the community much more visible and noticeable. And there’s been some progress on that.

Another thing one notices is that, generally speaking, you can get away with a lot of discriminatory behavior in academia without getting called out on it. Further, some kinds of censure just drive the behavior in deeper. We need a combination of supportive, informative feedback and rewards. And there are people working on just this.

20 thoughts on “Another piece of the bias puzzle”

  1. I recently and relatedly drafted an article about implicit bias, and although it will be published, the editors expressed concern that I provided no road map to its disruption and/or reduction. Articles like this leave me absolutely blue, suggesting that efforts to unseat biases in individuals are doomed. So if/when you get a chance, I’d really appreciate some verifiable good news, some studies with high confidence in their results, to the effect that biases are demonstrably unseatable. I really am familiar with the literature on bias on the whole, but most of it is excellent at describing how wide and deep biases run, not at pointing to routes of remedy.

  2. I have read nothing on the subject of implicit bias, except what I have learned from following this blog, but I hope and I believe that as a result of reading about it, I have managed to identify some forms of implicit bias in my own behavior and to watch for them, so that they do not distort my vision of the world.

    Changing them seems to be a question of watching oneself. There is nothing magic about that, but nothing that is impossible either.

    As to the post above, introspection can only make one more aware of what is already in one’s “head”. To learn something new (in ethics), one has to talk to others and/or to observe them. The problem is of course to select the right others: strong implicit biases would tend to lead one to select the wrong others to learn from.

  3. Kate, there is some evidence that the knowledge that someone will be keeping an eye on you reduces the effects of implicit bias.

    I read a study of Major League Baseball umpires. Their ball/strike calls were more favorable to the pitcher when the pitcher was the same race as the umpire. (Very small effect, but the sample size was enormous so the significance was high.) This effect went away in games in which the umpires knew there was going to be Pitch f/X review.

    So, maybe the affirmative action monitoring committee (if there’s anything like that at your university) is having an effect, even if it never overturns any decisions and seems sort of toothless.
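    Jamie’s point that a “very small effect” can still be highly significant when the sample is enormous can be made concrete with a toy two-proportion z-test. The strike rates and sample sizes below are invented for illustration; they are not the figures from the umpire study.

```python
from math import sqrt

def z_two_proportions(p1, p2, n):
    """z statistic for the difference between two proportions,
    each estimated from n observations, using a pooled standard error."""
    p = (p1 + p2) / 2                  # pooled proportion
    se = sqrt(2 * p * (1 - p) / n)     # standard error of the difference
    return (p1 - p2) / se

# Hypothetical strike rates: 32.0% when umpire and pitcher match on race,
# 31.7% when they don't -- a 0.3-percentage-point gap.
z_small = z_two_proportions(0.320, 0.317, 10_000)     # modest sample
z_large = z_two_proportions(0.320, 0.317, 1_500_000)  # MLB-scale sample
print(round(z_small, 2), round(z_large, 2))
```

    With the same 0.3-point gap, the modest sample gives z ≈ 0.46 (nowhere near significance) while the MLB-scale sample gives roughly z ≈ 5.6, which is why a tiny effect can show up clearly in a huge data set.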

  4. Just a quick remark about the Lehrer piece: it does not describe the actual results of the main new article it purports to be covering, namely “Cognitive Sophistication Does not Attenuate the Bias Blind Spot” in the Journal of Personality and Social Psychology, by Richard West, Russell Meserve, and Keith Stanovich. A clear and concise summary of the chief discrepancies between Lehrer’s blogpost and the actual article can be found here:
    http://lesswrong.com/r/discussion/lw/d1u/the_new_yorker_article_on_cognitive_biases/6tgn.

    Probably the most important discrepancy is that, for most (well-studied) cognitive biases, cognitive sophistication is NOT positively correlated with bias. What makes the new article interesting is that it found a particular kind of bias that IS positively correlated with cognitive sophistication. (Actually, the authors call it a ‘meta-bias’: it’s the belief that I am less susceptible to biases, such as framing effects or the base-rate fallacy, than other people are.)

  5. Interesting study, Jamie; do you remember where you saw it?

    In answer to Kate’s question about whether there is any research which suggests that implicit biases are demonstrably unseatable:

    Some studies suggest that one effective way of diminishing the impact of implicit bias is through exposure to what are known as counterstereotypical exemplars: members of the negatively stereotyped group who don’t fit the mold. Mere exposure to counterstereotypical exemplars—even simply spending some time imagining a counterstereotypical exemplar—can influence automatic associations (e.g., Blair, Ma, & Lenton, 2001) and implicit bias against specific outgroups (e.g., Dasgupta and Greenwald, 2001).

    One initial criticism of this work (with respect to practical applications) was that the exposure to counterstereotypical exemplars produced only fleeting results; in the Dasgupta and Greenwald 2001 study, for example, the decrease in implicit bias lasted only approximately 24 hours after the exposure to the counterstereotypical exemplar.

    But there was a compelling 2004 study in which Dasgupta and Asgari examined the extent to which college-age women harbor negative stereotypical associations between gender and leadership as a function of attending an all-women’s versus a co-educational college. Given that all-women’s colleges typically have a higher ratio of women to men in leadership roles and traditionally male disciplines, Dasgupta & Asgari predicted that students at such colleges would be less likely to retain non-conscious beliefs associating leadership with men. Their year-long longitudinal study clearly demonstrated that female students’ automatic beliefs about the leadership capacities of their ingroup are affected by exposure to female leaders and faculty members on campus. Women at single-sex colleges did not differ in their implicit endorsement of gender schemas related to leadership at the point at which they entered college, but by their sophomore year their implicit beliefs had diverged significantly from those of their peers at co-educational colleges. This effect was mediated by the number of female professors the students had studied with, and the exposure to women teaching traditionally male-dominated courses (e.g., economics, science, mathematics), suggesting that the frequency of interaction with female leaders, and the presence of females in non-traditional roles, were also important factors. Furthermore, participants’ conscious self-reports of beliefs about gender did not change substantially between the first and second year. And the results in this case were much more durable, lasting many months after the exposure.

    In other words, the work suggests that one prescription for implicit bias is to create environments in which counterstereotypical exemplars are present, abundant, and visible.

    What this and related research underscores is the power of local environments—specifically, the absence or presence of counterstereotypical exemplars in one’s environment—in shaping implicit beliefs about underrepresented groups.

    And so, to return to Anne’s original question (nice post!) about what might attenuate the bias blind spot, which is the non-conscious tendency to be unaware of, and thereby fail to compensate for, our own biases:

    While I don’t know of any research which explicitly addresses the question of how to diminish the impact of this particular cognitive failure — and while I fully appreciate the fact that the mechanisms behind the effectiveness of counterstereotypical exemplars are poorly understood and probably very complex — I think it is reasonable to think that there might be a version of the counterstereotypical exemplar solution to the problem of the bias blind spot.

    If we assume (just grant me this for the moment) that the problem with the bias blind spot arises because we think of those who are biased as clearly “other” — i.e., members of an outgroup or group with which we are not affiliated — then we can understand the bias blind spot phenomenon as a product of intercategory differentiation. “They” (the people who are biased) are not like “me” or “us” (the cognitive sophisticates who would be highly unlikely to succumb to such irrational biases).

    The counterstereotypical exemplars used in the Dasgupta & Asgari study appear to work by challenging the process of intercategory differentiation. When we differentiate between categories, we identify a salient feature Q which can be used to sort, or classify. In some cases, this identification may itself be implicit. Eagly and Karau’s 2002 work on role congruity theory, for example, argues that gender-based stereotypes about leadership stem not from explicit categorization of leaders along gender lines, but rather from the incongruity between the perceived characteristics of women (compassionate, self-sacrificing, etc.), and the perceived characteristics of good leaders (dispassionate, ambitious, etc.).

    In any case, whether they work by challenging the actual sortal Q, or by forcing implicit cognitive resolution of the apparent incongruity, it seems that the value of counterstereotypical exemplars, according to this line of thought, lies in the fact that they can serve as an obstacle to the relevant pernicious dimensions of intercategory differentiation. (This, by the way, would mean that the most effective counterstereotypical exemplars will be those who are not only accessible and pervasive, but are also: (1) exemplary members of the outgroup, and (2) able to create the cognitive dissonance required to cause a shift in implicit and, eventually, explicit attitudes. It is in this second requirement, I think, that the key to understanding the value of frequency of contact lies. Whereas we can dismiss a singular counterstereotypical exemplar, or even a whole group of sufficiently similar counterstereotypical exemplars as an outlier or exceptional subcategory, we cannot dismiss such exemplars when they rise to the level of the mundane or familiar.)

    So: one way of attenuating the bias blind spot would be to challenge the intercategory distinction that I have assumed might be one of the factors at the root of the bias. And one way of challenging the intercategory distinction might be through exposure to counterstereotypical exemplars, i.e., members of the outgroup (those who are biased) who are sufficiently like the “me” or “us” (the putatively unbiased group) in the relevant ways. In the case that Anne discusses — the bias blind spot of well-educated philosophers who have implicit biases against women in philosophy — the counterstereotypical exemplars would, I suppose, be philosophers who admit to having biases, particularly those who are perceived as least likely to be biased.

    Let me step up as a so-called counterstereotypical exemplar, then:

    I recently finished my PhD in Philosophy at a Leiter top-ten research institution.
    My research interests lie at the intersection of modal logic, the metaphysics of modality, epistemology, and philosophical logic.
    I studied math and philosophy as an undergraduate.
    I am an Associate Professor of Philosophy at a public university.
    I’m female.

    And I have implicit biases against women in philosophy.

    I work hard to identify them and compensate for them on a daily basis.

  6. This is very thought-provoking. Having only studied at the undergrad level, I don’t know much about the hiring process for professors. I have, however, noticed that people who are allegedly only concerned with quality tend to choose to teach works from the majority male, white canon. I was incredibly lucky to graduate with my BA in literature from a school where I had a bunch of hippy liberal professors who thought the canon was limiting and kinda bullshit. I got to read the likes of Toni Morrison, Edwidge Danticat, and others, plus a mix of semi-forgotten 19th-century writers in my senior seminar to introduce me to primary research. Would that we all had such kickass educators on our side.

  7. Heidi, yes, it’s a study published (after being discussed for years as a working paper) in American Economic Review, here. Free download of the pdf from the authors, here.

    The authors are interested in workplace discrimination, in particular evaluator bias. They suspected that there are interior dynamics that make the visible signs of bias disappear in econometric tests. The study uses umpires as the ‘subjective evaluators’ and pitchers as the workers.

  8. Kate: I have heard some people mention that multiple data points help to ward off the influence of bias. This crops up in the UK report, here: http://www.swipuk.org/notices/2011-09-08/, and I heard psychologist Brian Nosek describe this to the medical school at Stanford.

    Nosek was advising them in their effort to be forerunners in eliminating social bias in hiring and promotion, and he recommended that in advance there be an extensive breakdown of the specific qualifications that are used to evaluate a candidate. This is pretty much what Windsor has done for 20 years, but we’re considered to be quite progressive that way, at least in our formal methods — the difficulty has been that people don’t appreciate the reasons behind the practice, and at the time the methods were implemented there was much less to work with than there is now.

    I still haven’t found an article that prescribes the data-point breakdown, or even leads in that direction. (Help, anyone?) What I do hear is that only very specifically tailored techniques work. However, I’m undeterred by Tim’s comments at the CPA, which, upon further consideration, I believe may reflect a fairly individualist set of assumptions. I think the dialectical nature of argumentation, and especially the fallacies approach, may be of some — if only some — assistance as a general tool.

    Not only does dialogue break away from the inadequacy of introspection, it also makes room for data points by way of argumentation analysis. I suspect there may be great potential in dialogue techniques or standards — they’d have to be accepted as standards, I think. There is a danger of a conscious or unconscious backlash, certainly; so setting up the standards would require great care, but it looks to me like there is good work to be done there. It’s interesting that in West’s study (at least as reported in NYer) those who rely on dialogue a lot are more susceptible to bias. That suggests to me that dialogue may also be a good route towards remedy, at least in the sense of limiting the effect of our biases on our actions.

  9. Kate, I’m working on a part of this topic, and I must say that your question induced considerable fatigue. It’s all so hard!

    I think one thing that’s hard is that most of us thinking about bias tend to employ a model that is really unhelpful. We take a Davidsonian picture of reasons for action – a belief-desire pair – and in the case of implicit bias hypothesize that some part of that is out of consciousness. Edouard Machery has some telling criticisms (not in print, as far as I know) of this model, but there’s another set of objections I’ve been trying to push, which is that beliefs are not natural kinds and are really unsuited to be core elements in any theorizing we do.

    What we do need is to understand the underlying mechanisms better. One person who has tried to do this is Read Montague (along with Anne Harvey, who works in his lab), and you can find his fascinating work using the art-viewing paradigm on his website. I think that Montague is getting at one of the things we very seriously need to understand better; namely, the social dimensions of implicit bias, by which I mean all the stuff that helps create and hold biases in place. Read also has some interesting ideas about what to do about some biases. It may be reading his stuff that made me realize that there are few mechanisms for punishing bias in academia and, to some extent, medicine. (Note also Drcateh’s comment on getting away from the individualistic picture, which I think our ideas about belief definitely encourage.)

    Bias in medicine has been studied quite extensively by John Dovidio. We have a couple of references to his work on this site (just use the search function). I think the first reference is to an article that has a list of bias-mitigating things we can do. As his work shows, we can certainly get somewhere without really understanding what the brain is doing, but it is all pretty piecemeal.

    There is also interesting work on conditional intentions. It turns out that if I tell myself, “I will grade the next paper by a person of color fairly,” I greatly increase the chances of doing just that. But now, of course, we’re back to the problem of needing to know we have biases.

    I think Jenny Saul’s paper on her site also has some interesting comments about mitigating bias.

    We also have a reference to some work on how biased reactions can flourish in structured communities. I’m not sure how to find it; it might show up if one searches for “nsf” or, oddly, Michigan.

  10. Thanks all, and thanks especially for the pointers to material to alleviate my discouragement. I did not mean to spread my fatigue to Anne. :-) But I get discouraged.

    I have read some of the readings recommended in these comments. It’s just that they do not always add up to reason to hope. But since they sometimes do, I shall Keep Calm And Carry On!

  11. Kate, my fatigue was due to my own worries, corresponding to your discouragement.

  12. Jender, thanks so much. That’s what I was thinking of with ‘conditional intentions.’

  13. Thanks Anne! The Davidson reference is really helpful. I just spoke to a cognitive psychologist too, Kate, who assures me that the level of action is analyzed quite separately from the level of belief, and she agrees that there is significant room for remedy in action and even the formation of specific beliefs.

    With regard to structured communities, Anne, you might mean Cecilia Ridgeway’s 2011 book “Framed by Gender”? She looks at how more structured work environments mitigate gendered behaviour patterns. She advises that there is little hope of changing gender stereotypes except to renovate the family and raise a generation of differently biased kids. But she’s only talking about direct change of the bias, not negotiating its effects at the social/action level.

  14. Drcateh, by structured communities I was thinking more in terms of power relations that can lead to insider/outsider divisions. Dovidio has some observations that seem to me to capture a lot that goes on in philosophy. E.g., insiders don’t discuss the work of outsiders, they don’t remember their ideas, etc.

    In thinking about beliefs, I had philosophical approaches more in mind. Has your psych colleague written about this at all?

  15. Anne: Unfortunately my cognitive psychologist’s own research is on semantics. Thanks for clarifying about the structured communities — more pieces to a very complex puzzle. — Cate

  16. Cate, interesting. Certainly the picture emerging from neuroscience puts aspects of it there. One thing I’m thinking about is the variety of inputs into the decision-making process. It’s easy to think – as some of the literature has it – that implicit bias is basically a matter of employing stereotypes. I think that’s probably not true. Montague’s work shows, I think, that rewarding someone can bias them toward you, and that’s really not because of stereotypes. Similarly for being mean. Insider–outsider bias might draw on getting rewarded in addition to, or rather than, being influenced by stereotypes.
