I write a lot about implicit bias, and about how we should all be taking steps to mitigate it. I’m also Head of Department. So when I was placed in the position of hiring for two permanent posts, I decided to take the opportunity to put in place what seemed to me, based on what I know about implicit bias, to be the best practices. It went remarkably well, so I thought I’d report on what we did, and how and why we did it. And also on some of the difficulties, because it wasn’t QUITE as smooth as it could have been.
1. What we had candidates send: Anonymised CV and writing sample, with identifying information on a detachable cover sheet. In keeping with widespread UK practice, we only asked for names of referees at this stage, not references.
a. A detachable cover sheet only makes sense if applications are printed out rather than submitted electronically. I’m not sure why I asked for it, and I wouldn’t do it again: for electronic documents, removing it is a tedious bit of editing. Just ask for an anonymised CV and writing sample.
b. Candidates weren’t always sure what was meant by ‘anonymised’ or ‘identifying information’. Some worried that they should leave off their publications, place of PhD, or employment history. It’s much better to include a brief clarification of what to leave in. [What we actually wanted left off was just name and email.]
c. The e-recruitment system sticks candidates’ names into the file names of every file downloaded by those on the committee, adding *another* bit of anonymisation to do. Unless you have a system which doesn’t do this, you’ll need a bit of administrative help retitling all of these files. (And we really should advocate for systems that don’t do this!)
It IS vital to have a bit of administrative help – someone who can check to make sure that everything actually is anonymised, and who can also assign numbers to the candidates and keep a list of name-number pairings.
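For an administrator comfortable with a bit of scripting, the retitling and numbering could be automated rather than done by hand. Here is a minimal sketch in Python, under the purely hypothetical assumption that downloaded files are named `First_Last_DocType.ext` (your e-recruitment system’s naming will differ, so the pattern would need adjusting). It renames each file to use a candidate number and writes the name-number pairings to a CSV kept away from the committee:

```python
import csv
import re
from pathlib import Path

def anonymise_files(folder, pairings_csv):
    """Rename files like 'Jane_Smith_CV.pdf' to 'candidate_01_CV.pdf',
    recording name-number pairings in a CSV (to be kept from the committee).

    Assumes the hypothetical filename pattern '<First>_<Last>_<DocType>.<ext>'.
    """
    folder = Path(folder)
    pairings = {}          # candidate name -> assigned number
    rows = []
    for f in sorted(folder.iterdir()):
        if not f.is_file():
            continue
        m = re.match(r"(?P<name>[A-Za-z]+_[A-Za-z]+)_(?P<doc>.+)", f.name)
        if not m:
            continue       # leave files that don't match the pattern alone
        name = m.group("name")
        if name not in pairings:
            pairings[name] = len(pairings) + 1
            rows.append((name.replace("_", " "), pairings[name]))
        # e.g. 'Jane_Smith_CV.pdf' -> 'candidate_01_CV.pdf'
        f.rename(folder / f"candidate_{pairings[name]:02d}_{m.group('doc')}")
    with open(pairings_csv, "w", newline="") as fh:
        csv.writer(fh).writerows([("name", "number"), *rows])
    return pairings
```

This is only a sketch of the idea; in practice the administrator would still need to check the results, since candidate names can also appear inside the documents themselves, not just in the file names.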
2. How we long-listed: We long-listed on the basis of CV alone, to get down to 15-30 candidates. Our focus was primarily on meeting area needs and publication record.
3. After long-listing, we read anonymised writing samples. We also sent away for references. This decision was the subject of debate. I favoured waiting until we’d shortlisted, because of well-documented biases in reference-writing, and also because of national differences (e.g. US references are MUCH more glowing than UK ones). However, some wanted references to be used in shortlisting. Our compromise was to have references sent to a special email account, to which committee members would only be given access a couple of days before the shortlisting meeting. At that point, they were also given access to the name-number pairings.
4. How we shortlisted: Shortlisting was based on full information: CV, writing sample and references. Fascinatingly, though, even those who had advocated the use of references in shortlisting found them to be of little interest after close examination of CV and writing sample. All felt that the use of references had in the past been a short-cut that was only apparently useful, and which probably served to short-circuit proper consideration of more significant information. We also found that in many cases we had failed to recognise the written work of those we actually knew, so the anonymity had worked remarkably well.
You might wonder why we didn’t anonymise references. One reason is that it’s a lot of work – you need to eliminate every occurrence of a name or gendered pronoun. Another is that if a reference is anonymised, you can’t try to take into account the tendency for referees to, e.g., describe women as ‘hard-working’ and men as ‘brilliant’.
5. How we hired: Our process is a long one by UK standards and a short one by US standards. The main events are job talk (1.5 hours, including discussion) and interview, though there are also a couple of meals. The most important bias-fighting measure I took at this stage was in the discussion of each candidate post-interview. I did not allow overall gestalt evaluations or comparative evaluations until the very end. Instead, we agreed a list of topics we would discuss about each candidate in turn. I listed these on a whiteboard to make sure they got covered in every case. We carefully distinguished such things as written work, job talk, and discussion period so as not to give any of these undue weight. (There’s a good case to be made that written work is a better indication of research ability than a job talk given under immensely stressful conditions, including in many cases stereotype threat. Yet nonetheless it’s all too easy to focus more on the job talk.) Only after each candidate was discussed in detail did we turn to comparative judgements. This led to much richer and more useful discussion than I’d experienced before in such circumstances (and I’ve lost count of the number of hiring committees I’ve been on!). In both cases, we had very strong fields, and therefore extremely difficult decisions to make. But we all felt that this process helped enormously in making these decisions.