Jules Holroyd, Sarah-Jane Leslie and Sophie Stammers on Radio 4’s Analysis programme!
Here’s a link for the radio show.
And here’s a short accompanying article.
The Chronicle of Higher Ed has an article on implicit bias which raises questions a number of philosophers have pursued. The general question is about the relation between implicit bias and biased behavior, with specific reference to the IAT (Implicit Association Test). The conclusion that is emerging is that getting rid of implicit biases will not get rid of the biased behavior. The point is made that from the fact that bias causes biased behavior, it does not follow that getting rid of bias gets rid of the behavior.
While that sounds true, we should know more. One reason could have to do with what ‘implicit bias in a person’ really is. Edouard Machery has a lot of work on this question. But in general we have known for some time that beliefs tend to occur in networks, and changing a belief in one network may leave it more or less intact in another part. Another reason could be that the biased actions or their results are held in place by structural factors in the society. I am not sure who else has pursued this line of investigation, but Sally Haslanger has a lot of excellent work in this area. In my contribution to the 2-volume OUP work on implicit bias edited by Brownstein and Saul, I raised a question about this and looked at some recent psychological literature. Once one starts to look at the structural factors underpinning racist actions, for example, one can see that the needed change has got at least to include a change in social structures. (You can find a good article by Machery in the same 2-vol edition. Google will reveal lots of work by Haslanger.)
It would be great to get more reading suggestions on the structural side in the comments.
For people who have been following this literature at all closely, one of the biggest surprises is that one of the originators of the supposedly fundamental IAT seems to have changed sides! That’s Brian Nosek of the University of Virginia.
Folks here know that I (like many of our bloggers) am very interested in implicit bias. But I get really angry when I see it invoked where it’s really not what we should be talking about. Like the Terence Crutcher murder.
In the wake of yet another killing of an unarmed black person by a police officer, we are once again hearing about the importance of fighting implicit bias. Now, I am completely on board with the thought that it’s important to fight implicit bias: I just published two co-edited volumes on it. It’s important, and it explains a lot. But it does not explain this murder, and it is the wrong place to look for a solution to the problem of police shootings of unarmed black people.
If you’ve picked up bits and pieces on implicit bias and wish you had a more systematic, connected-up view, think of taking advantage of this temporary reduction in price, from $13.99 to $1.99. This is for the e-text, and the reduction is available at several different retailers.
Apologies for the text placement. I’m failing at changing it.
The UK government has decided to hastily throw together a framework for assessing teaching quality, which will be linked to funding. One key feature will be a heavy reliance on existing measures of student satisfaction like the National Student Satisfaction survey. Jules Holroyd and I have an article about this out in the Guardian today, drawing attention to worries about implicit bias in student satisfaction scores (though also noting other problems with these measures!).
A key part of the argument seems to be a comparison of REF ratings and citation metrics, which makes me a bit iffy about it. Still, well worth a look. Two bits that leapt out at me:
In general, if the actual REF grades are compared with those predicted based on citations, there is no apparent correlation. However, there is an evident trend that more theoretical/mathematical areas are favoured under REF compared with metric predictions, whilst more applied and human-centric areas are disfavoured.
In terms of the world rankings, an output in a more applied area, on average, needed to be in the top 0.5% (top 1 in 200) of its discipline to obtain a 4*, whereas in theoretical areas it was sufficient to be in the top 5% (1 in 20). That is, our panel scores did not at all reflect the assessment of global peers.
Finally, both the sub-areas of computing and the institutions disadvantaged by REF are those that tend to have a higher proportion of female academics. That is, the apparent disciplinary and institutional bias would be likely to create implicit gender bias. Indeed, amongst other things, HEFCE’s “The Metric Tide” revealed that the computing sub-panel awarded substantially more 4*s (top grade) to male authors than female ones, an effect that persists even once other explanatory factors are considered.
In his opinion in Texas Department of Housing and Community Affairs v. Inclusive Communities Project — which recognized “disparate impact” (i.e., discrimination without intent) as a legitimate cause of action — Justice Anthony Kennedy writes, “recognition of disparate-impact liability under the FHA also plays a role in uncovering discriminatory intent: It permits plaintiffs to counteract the unconscious prejudices and disguised animus that escape easy classification as disparate treatment.” It’s terrific to see the Supreme Court seeming to recognize implicit bias as contributing to discrimination. And, as this Slate article points out, it also raises interesting questions about moral responsibility and implicit bias.
The study, published in Science, began with two Pavlovian-style conditioning exercises designed to counter race and gender biases. In the first, participants were shown female faces with words linked to maths or science and in the second, black faces appeared with pleasant words.
During the tasks, two distinctive sounds were played – one that came to be strongly associated with the gender pairs and the other with the race pairs.
Following the training, participants took a 90-minute nap, and once they entered deep sleep, one of the sounds was played repeatedly without their knowledge.
After the counter-bias training exercise, and before the nap, people’s bias tended to have fallen, but without the extra cues during sleep, their level of bias had almost recovered to baseline after the nap. However, when participants were played the sound cues during sleep, their bias scores reduced by a further 56% compared to their pre-sleep score. Their scores remained reduced by around 20% compared to their initial baseline when the participants were tested one week later.
I am filing this one under “important if true”.
There’s some really interesting discussion starting to take place about the pros and cons of various kinds of ranking systems for philosophy. A couple of years ago I wrote a paper on the potential for implicit bias in both the PGR and the REF. I thought it might be useful for that paper to be a part of these discussions, so I’m posting the penultimate draft of it here. (Also, I can’t figure out how to use my university’s newly updated CMS. Grr!)
The paper is “Ranking Exercises in Philosophy and Implicit Bias”, in Journal of Social Philosophy, 43:3, 2012.
It was only after publishing the paper that I noticed another interesting difference between REF and PGR. I’m no great fan of the REF – it has lots of problems – but it does have the nice feature of not weighting any area of philosophy more heavily than others. Whatever area your work is in, it’s only ranked by people in your area, and there’s no overall ranking of departments, except in so far as various competing ones can be (and are) arrived at through the rankings of work, impact, etc. So there’s no case to be made that a department will do better in the REF by hiring an analytic metaphysician than a pragmatist. Departments are free to just go by quality and teaching/supervising needs, without worrying that they should favour particular areas for the sake of the rankings.
Published in the Journal of Applied Psychology, here.
We propose and test a new theory explaining glass-ceiling bias against nonnative speakers as driven by perceptions that nonnative speakers have weak political skill. Although nonnative accent is a complex signal, its effects on assessments of the speakers’ political skill are something that speakers can actively mitigate; this makes it an important bias to understand.
There are interesting tie-ins with an earlier post in this blog on bias and foreign languages.