On Mitigating Bias in a Job Search

I write a lot about implicit bias, and about how we should all be taking steps to mitigate it. I’m also Head of Department. So when I was placed in the position of hiring for two permanent posts, I decided to take the opportunity to put in place what seemed to me, based on what I know about implicit bias, to be the best practices. It went remarkably well, so I thought I’d report on what we did, and how and why we did it. And also on some of the difficulties, because it wasn’t QUITE as smooth as it could have been.

1. What we had candidates send: Anonymised CV and writing sample, with identifying information on a detachable cover sheet. In keeping with widespread UK practice, we only asked for names of referees at this stage, not references.

Problems:

a. A detachable cover sheet only makes sense if applications are printed out rather than submitted electronically. I’m not sure why I asked for it, but I wouldn’t do it again: for electronic documents, removing it is a tedious bit of editing. Just ask for an anonymised CV and writing sample.

b. Candidates weren’t always sure what was meant by ‘anonymised’ or ‘identifying information’. Some worried they should leave off their publications, or place of PhD, or employment. Much better to put in a brief clarification of what to leave in. [What we actually wanted left off was just name and email.]

c. The e-recruitment system sticks candidates’ names into the file names of every file downloaded by those on the committee, adding *another* bit of anonymisation to do. Unless you have a system which doesn’t do this, you’ll need a bit of administrative help retitling all of these. (And we really should advocate for systems that don’t do this!)

It IS vital to have a bit of administrative help: someone who can check to make sure that everything actually is anonymised, and who can also assign numbers to the candidates and keep a list of name-number pairings.
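For the retitling step, the administrator’s work can even be partly automated. The sketch below is purely illustrative and assumes a particular setup not described in the post: a confidential CSV of name-number pairings kept by the administrator, and downloaded files whose names embed the candidate’s name with underscores (e.g. `Jane_Smith_CV.pdf`). Real e-recruitment exports may name files differently, so any such script would need adjusting, and the output should still be spot-checked by a human.

```python
import csv
from pathlib import Path


def load_pairings(csv_path):
    """Read the confidential name -> number list (columns: name, number)."""
    with open(csv_path, newline="") as f:
        return {row["name"]: row["number"] for row in csv.DictReader(f)}


def anonymise_filenames(folder, pairings):
    """Rename files like 'Jane_Smith_CV.pdf' to 'candidate_07_CV.pdf'.

    Returns the list of new file names, so the administrator can
    eyeball the result before sharing files with the committee.
    """
    renamed = []
    for path in sorted(Path(folder).iterdir()):
        for name, number in pairings.items():
            key = name.replace(" ", "_")  # assumes underscores in file names
            if key in path.name:
                new_name = path.name.replace(key, f"candidate_{number}")
                path.rename(path.with_name(new_name))
                renamed.append(new_name)
                break
    return renamed
```

Only the administrator runs this; committee members never see the pairings file until the agreed point in the process.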

2. How we long-listed: We long-listed on the basis of CV alone, to get down to 15-30 candidates. Our focus was primarily on meeting area needs and publication record.

3. After long-listing, we read anonymised writing samples. We also sent away for references. This decision was the subject of debate. I favoured waiting until we’d shortlisted, because of well-documented biases in reference-writing, and also because of national differences (e.g. US references are MUCH more glowing than UK ones). However, some wanted references to be used in shortlisting. Our compromise was to have references sent to a special email account, to which committee members would only be given access a couple of days before the shortlisting meeting. At that point, they were also given access to the name-number pairings.

4. How we shortlisted: Shortlisting was based on full information: CV, writing sample and references. Fascinatingly, though, even those who had advocated the use of references in shortlisting found them to be not of much interest after close examination of CV and writing sample. All felt that use of references had in the past been a merely apparently useful short-cut, which probably served to short-circuit proper consideration of more significant information. We also found that in many cases we had failed to recognise the written work of those we actually knew, so the anonymity had worked remarkably well.

You might wonder why we didn’t anonymise references. One reason is that it’s a lot of work: you need to eliminate every occurrence of a name or gendered pronoun. Another is that if a reference is anonymised you can’t try to take into account the tendency for referees to e.g. describe women as ‘hard-working’ and men as ‘brilliant’.

5. How we hired: Our process is a long one by UK standards and a short one by US standards. The main events are job talk (1.5 hours, including discussion) and interview, though there are also a couple of meals. The most important bias-fighting measure I took at this stage was in the discussion of each candidate post-interview. I did not allow overall gestalt evaluations or comparative evaluations until the very end. Instead, we agreed a list of topics we would discuss about each candidate in turn. I listed these on a whiteboard to make sure they got covered in every case. We carefully distinguished such things as written work, job talk, and discussion period so as not to give any of these undue weight. (There’s a good case to be made that written work is a better indication of research ability than a job talk given under immensely stressful conditions, including in many cases stereotype threat. Yet nonetheless it’s all too easy to focus more on the job talk.) Only after each candidate was discussed in detail did we turn to comparative judgements. This led to much richer and more useful discussion than I’d experienced before in such circumstances (and I’ve lost count of the number of hiring committees I’ve been on!). In both cases, we had very strong fields, and therefore extremely difficult decisions to make. But we all felt that this process helped enormously in making these decisions.

21 thoughts on “On Mitigating Bias in a Job Search”

  1. Should “We also sent away for writing samples” under 3 read “We also sent away for references”?

  2. Out of interest, with respect to 1b, can I please ask what you were interested in (i.e. what would the clarification have said in this instance)?

  3. Grad: yes, thanks! Have fixed.

    Aidan– all we wanted left off was name and email address.

  4. Jenny, I am so glad you put this up. I kept trying to remind myself to ask you to when I heard about your efforts to mitigate bias in the job search. There’s got to be a lot to learn, I thought, from even the first attempt, given how informed the source is.

    I’m hesitant to ask too many questions, since I’m assuming you haven’t done a statistical analysis of the results, at least not yet (if one on a relatively small sample could even be that revealing). One thing, though, I’m wondering is whether there was much to learn about the selection of candidates in comparison to the professional reputation of their PhD departments. (I really do not want to refer even covertly to the PGR, since the discussion of it is getting fraught. One might have an independent sense of whether those on the final list were from the famous departments or not.)

  5. We didn’t eliminate knowledge of degree-granting institution, so it was available. We have always had widespread agreement that work is a much better criterion than “pedigree”. However, the lateness of the references in the process probably made it even easier to just focus on the work. Certainly it felt like that subjectively.

  6. Also: Because of the way that reading of writing samples was separated from reading of CVs, people often weren’t aware of which sample went with which anonymised CV (although the information was available to them). And people did express surprise about the weakness of some papers by people from starry institutions, and the strength of some papers by people from lesser-known places.

  7. Jenny, at some US institutions, there are various stages at which there is oversight of our process from the Equal Opportunity Affirmative Action office, Human Resources, and/or the Dean’s office. This can require us to flag women and minority candidates at some stage of the process for affirmative action considerations, which is already difficult since names are not a completely reliable heuristic and there is seldom self-identification in the files. I’d be interested to hear your thoughts on how fully anonymous files at the early stages can be made consistent with these requirements. Or, should we think of the processes you describe as alternatives to these kinds of practices?

  8. Really interesting question! It is tough to see how that could be done during the long listing process that I’ve described. However, there’s no problem with doing this during the move from long list to short list, since at that stage we did (eventually) remove the anonymity.

    There are multiple reasons for flagging the candidates from underrepresented groups. One is to take an extra look to make sure they aren’t getting short shrift just for being from those groups. Anonymising is, I think, a viable alternative, and probably preferable. Another is that they are likely to have encountered barriers that other candidates won’t have, and you’d like to take these into account. Anonymising makes this impossible. A third reason is that you want to give preference to such candidates. Anonymising makes that impossible too. BUT in the UK it is only legal to give preference to a member of an underrepresented group as a tie-breaker. So for us anonymising the first stage but then allowing a tie-breaker during the move to shortlist would work fine. (No need for tie-breakers at longlisting: you can always just add one more to the long list.)

  9. Very pleased that you’re using substantive work rather than “pedigree” in judging applicants.

  10. I was one of the applicants for one of the positions and made the long list but not the shortlist. The procedure on the whole was much better than for other jobs. As soon as they decided on the shortlist, the SC sent a very sensitive note saying I wasn’t shortlisted. I appreciate the speed and timing of doing this (i.e., at the shortlisting stage rather than when the hire is made).
    The only downside is that the emphasis on the writing sample, and my knowledge (via my advisor) that I was long listed but not shortlisted, resulted in a nagging feeling that the writing sample was not good enough…


  12. Many thanks for posting this! It was really helpful to hear all of the details. Is there a description of a similar process for grad admissions somewhere? (Sorry if this was discussed recently and I just completely missed it…) One difference that I worry about in that context is that it might be harder to rely as much on writing samples because some students seem to be getting much more help with them from faculty at their undergrad institutions (I think some of those samples maybe feel “polished” in a way that might sway people without really mattering). But I don’t know what would be better, so…

  13. (And one might worry that help with writing samples would track various biases and so on).

  14. Thanks so much for posting this, Jenny. I was wondering if you might elaborate on the parenthetical claim about job talks in the final paragraph: how would you make the case that job talks aren’t as good an indicator as written work, and that it’s too easy to focus on them? Thanks!

    I fully agree with that, by the way, and have a hunch that the hiring process would be fairer if we replaced the standard job talk with something else (though I don’t know yet what that something else would be).

  15. Some of what you describe sounds great.

    I’m surprised to learn that you don’t read writing samples until the stage of “15-30” applicants. How many applicants were eliminated to reach that stage in your search? Unless those being eliminated were obviously unsuitable for the position (because they don’t have the advertised AOS perhaps?) that seems like a critical thing to skip over in even an initial screening of files. At my institution we receive at least a couple of hundred applications except when doing highly specialized searches. We read writing samples from those meeting the advertised minimal qualifications for the position as the primary means of evaluation.

  16. When I’ve been on search committees, our goal has been to hire someone amazing without spending too much time on the process. I’m under no illusion that we’ve always managed to identify the “best” applicant, but the ones we’ve hired over the past 10 years have indeed been SO amazing that had we spent any more time on the process, diminishing returns would have set in very quickly. And by any reasonable standard, it would be a bad result had the ones we did hire not gotten jobs, so if extra effort had resulted in our hiring someone different, it would only have changed the identity of the undeservedly shunned… and probably forced our students to wait longer to get less adequate feedback on their assignments.
