Dealing with gender/topic biases in teaching evaluations

A reader solicits practical strategies for facilitating the sensible institutional interpretation of student evaluations of teaching, given the empirically well-founded worry (as was noted on this blog recently) that such evaluations express a substantial bias against women instructors.

Hello wonderful community of feminist philosophers, I’m hoping that you can help me with a problem that is not just mine but is one that so many of us share. This is the problem of teaching evaluations. Teaching evaluations as a method of assessing teaching leave much to be desired. However, their use becomes even more problematic or worrisome when (as in my case) they are used as one of three main criteria for annual departmental evaluations and promotion.

There is good evidence to show that anonymous course/teaching evaluations are biased against women and a number of other underrepresented groups. Most recently, there is this study. But in addition to evaluations being generally biased against women, I’m facing an additional issue: in all of my courses I include a good deal of feminist and critical race theory. Having recently read my course evaluations, I noticed that a good number of my students reacted negatively to this material. For example, there were many comments that spoke to the “problem” of so much feminist philosophy, about how I’m trying to “indoctrinate them,” and about how if they didn’t simply agree with my (feminist) positions then I would give them low grades. Of course, all of these claims are false, but nonetheless I am worried about their presence. It seems that on the basis of the content of my courses (in addition to the gender bias), my evaluations are markedly lower than those of others (and for reasons that have nothing to do with my actual teaching abilities).

So I’m wondering whether and how people in other departments have dealt with this problem. I’m pretty certain that my institution (big, public university) is committed to keeping them, so abolition is not on the table at this point. Still, I wonder if there is any way to take into account these known biases so that certain groups of people are not systematically disadvantaged. Have any departments tried other methods of assessing teaching either instead of or in addition to the required ones? Even though my university probably isn’t going to stop using teaching evaluations any time soon, it is possible that my department might be persuaded to use a different method of assessing teaching when it comes to departmental annual merit reviews (or at the very least, supplementing the university required teaching evaluations with some other methods).

Looking forward to hearing your thoughts on this matter.

4 thoughts on “Dealing with gender/topic biases in teaching evaluations”

  1. My department has encouraged me to ask a senior colleague to observe my teaching, and to write a letter for my file. This can work if you have a trustworthy senior colleague.

  2. I agree with the first comment, but also wonder if there is a forum (biobib or whatever) where you could contextualize your scores, including quotations about the course content as a bias factor as well as any about gender specifically. At the least, you might have that discussion with your department chair, or with a trusted mentor or other senior colleague who could take it to the chair.

  3. I think you raise a very important question that hasn’t yet been well addressed by the philosophy profession. There are two issues that need different strategies. First are strategies for fixing the univ policy. Second are strategies for this year’s promotion/review.

    For a start on dept/univ policies: First, it is a philosophy problem, so do try to take a dept approach. You might try to get at least all the women in the dept to refuse to participate in further evaluations. And explain why. Others you might try to enlist include the dept logicians, since formal logic courses often get less glowing evaluations, esp. if they are distribution requirements that students can’t avoid. Approach each of your gentler, friendlier colleagues individually about your interest in addressing the problem. You will probably find colleagues are more likely to lend their support if you are able to raise the matter not so much as a point about your own promotion/annual review, but as your concern about the advancement of a woman more junior to you. It’s safer and smoother to go in asking for help to protect or to advance someone else. It helps to diffuse the competitive spirit and is more likely to appear ‘reasonable’. (Don’t suppose that your being disadvantaged worries them. Assume that it doesn’t. Understand that most people are uncomfortable dealing with women advocating for themselves, so try to bring real systemic change by advocating for someone else.) Set a realistic goal: can you aim to change dept or univ policy within, say, 4 or 5 years?

    Some things NOT to do: Don’t put copies of studies about the problems with anonymous evaluations in your promotion/annual review materials. Don’t put them in all your colleagues’ mailboxes for them to read on their own, as a way to start the conversation. Don’t include web links to such studies in your promo/review materials. Try instead to guide the discussion, one-on-one, as a serious and concerned colleague. Don’t get baited into debate with anyone about the merits of anonymous student evaluations.

    You want to avoid anyone ever ‘reacting’ quickly with hostility. Quick responses might not be helpful. Strategise and try to delay any objections until long after people have made up their minds to support you in changing the dept/univ policies.

    Aim to convince your dept to act together as a dept to bring the issue to the univ for policy change.

    I have not participated in any anonymous student teaching evaluations in 10 years. I had good reason not to. In a big public research univ, I was the only female in the dept, the only logician in the dept, I had tenure, and I had a boss who was trying to have me fired. One year the boss wrote personally to each of my students, asking them to contact him in the first instance if they had any kinds of complaints about me. Simply, he made it okay to find fault with me, and overnight students started to find fault with me, in droves. So I had to stop participating in anonymous student evaluations. I never saw a reason to go back to them. I found other ways to provide evidence about my teaching:

    Save any student emails, save colleagues’ emails, anonymise them, and include them, numbered and with internal references in the ‘teaching section’ of your promo applications. That lets you control the available info and lets you weave the fan letters into your evidence about how students engage with you and how they value what you provide.

    If your situation is dire, ask a senior referee who has seen you giving conf talks or teaching or mentoring students to comment specifically on how effectively you communicate with students. But that is at best secondary evidence and you need direct evidence.

    A different tack: Talk to the techies at your uni about the technology available to assist student learning. Find out about the tech and use it. And then in your promo materials say how you sought the new technology out, learned it, and have successfully incorporated it in your teaching,… to thus and such effect. Charts and tables comparing before and after can be made to look like the kind of ‘serious, measurable data’ that committees like.

    I used to get great comments from male logic students, but less enthusiastic comments from women logic students. I asked all the students to try to identify the parts of the class they liked most and disliked most. The results followed gendered lines! So I tweaked the way I presented the material, to let the students do more of whatever they enjoyed, e.g., validity testing or translations from English into formal logic. It built their confidence. They all did better and learned more logic. The students’ response was great. Then I wrote all about it as evidence of my teaching excellence! It helps to save the negative comments so that you can include them to show how you successfully addressed the underlying issues.

    You can find all sorts of ways to put good evidence in front of a promo/review committee. After a while it actually can get fun figuring ways to out-play the anonymous student evaluations game! And getting the promotion helps to defeat the bias against women in philosophy, so stick at it.

  4. I heartily second the idea of asking trusted senior colleagues to observe your teaching and write letters. You can contact faculty in other departments if you think those in your own area cannot provide an unbiased account of your teaching, since tenure decisions or other promotion or contract reviews usually take place at the college level and beyond, in addition to the department.

    I would consider soliciting more detailed analyses of your teaching from upper-level courses that tend to enroll more students with a strong interest or major in the subject. Often they can provide a more detailed and meaningful assessment than students who have chosen your course merely to fulfill requirements.

    It can be helpful to devise your own survey questions and ask someone else to administer the survey to your students at mid-term (and perhaps again at the end of term, to detect any evidence of improvement). Some universities have a faculty development or teaching support office that can collect the data for you. Colleagues in the Education college may have more experience with this sort of thing and might be willing to help. This allows you to ask about specific aspects of your teaching that are not reflected on the evaluation form. General questions about teaching are open to biases about whether women are organized, funny, or have sophisticated expertise. Specific questions about how quickly you returned assignments, how detailed your feedback on assignments was, or how often you helped students outside of class might yield more accurate responses.

    My teaching evaluations have changed over the years, and while it is possible that I too have changed, in at least some respects I’m a far better teacher than I was when my career began. Evaluations do not reflect this in any obvious way. Students used to rate me as more organized, in particular, and if anything I am far better organized now than I used to be. Perhaps my efforts are less obvious or students expect more hand-holding nowadays, but bias is another possible explanation for the change. Anecdotal experiment reveals that students give me roughly the same evaluations after the first three weeks of teaching that they give at the end of the term, suggesting that the factors on which one is evaluated might be rather different from what otherwise might be supposed. Other anecdotal experiments reveal better evaluations in courses for which I dressed in suits instead of more casual clothing (and this went along with higher ratings on organization). Actual statistical analysis of a survey across several courses revealed that students also falsely attributed several specific political beliefs to me before they had any idea what my actual beliefs about any topic were. Given the things I have learned about teaching evaluations over the years, I find it astonishing that educated scholars continue to rely on them so heavily for critical information about tenure and promotion.

Comments are closed.