Another reason for anonymous marking

A recent study provides evidence that:

among similarly qualified female students — those who are physically attractive earn better grades than others. For male students, there is no significant relationship between attractiveness and grades. And the results hold true whether the faculty member is a man or a woman.

The attractiveness gap disappears in online courses, and it would presumably do so under an effective anonymous marking regime as well.

4 thoughts on “Another reason for anonymous marking”

  1. Reading the link, the effect seems very small. The higher of the two numbers represents just under 7% of a letter grade (1.7% of a 4.0 scale, but since students’ grades don’t really span the whole scale, expressing it as a percentage of one letter-grade increment seems more accurate). That doesn’t really strike me as especially meaningful. Especially in this era of replication failures, I can see this small effect failing to be reproduced.

    Of course, on the flip side, I’ve often been in grading positions where I could not tell you which student I was grading. So one could argue that the small effect is partly due to including data from courses where the TAs/graders did not know the students, which would dilute it. But then you’d probably just want a study that only looks at courses where the graders did know the students? I am no longer in a position to look up academic papers free of charge, so I can’t say whether this last complaint has any weight behind it, but it seems a reasonable inference from the article.

    In any case, my sense is that the argument for anonymous marking is strong enough that it doesn’t need this study as part of it?

  2. I haven’t read the study, so I want to be careful, but after reading the Inside Higher Ed article I’m very skeptical. Indeed, it seems that, when you run a regression on the study’s data using a single attractiveness variable (on a scale of 1 to 10), the effect is not significant. And, reading between the lines of that article, my impression is that the authors, disappointed by that result, simply tweaked the analysis until it gave them the result they wanted: they created three categories of attractiveness and ran a different regression using those categories instead of the original 1-to-10 variable. By doing that, they found a significant coefficient for the least attractive category, though not for the other categories, whereas the variable in the original regression was not significant. (The sketch after these comments shows what those two specifications look like.)

    If I’m right, this study would be a good example of how statistics can be used to prove what you had already decided was true, illustrating the well-known principle that, if all you’re looking for is *a* significant effect in the right direction, no matter how it was obtained, you can almost always manipulate the data or the analysis to find one. Admittedly, I’m reading quite a lot into that Inside Higher Ed article, so it would be nice if someone who has actually read the study could confirm or disconfirm my suspicions. In any case, as someone already noted above, even if the original effect were in fact significant, it’s so small that I still don’t think the study would be particularly interesting. Rather, it would illustrate something else: that social scientists should care more about effect sizes, not just about statistical significance.
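
To make that description concrete, here is a minimal sketch, in Python with pandas and statsmodels, of the two regression specifications at issue: one using the 1-to-10 attractiveness rating as a single continuous variable, and one using three attractiveness categories instead. Everything in it is hypothetical: the data are simulated, and the variable names, the cut points for the categories, and the size of the effect are invented for illustration. It shows only what the two specifications look like, not the study’s actual analysis or results.

```python
# Hypothetical sketch on simulated data (not the study's data or code),
# contrasting the two regression specifications discussed in the comments.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000

df = pd.DataFrame({
    # Attractiveness rated on a 1-10 scale, as described in the article
    "attractiveness": rng.integers(1, 11, size=n),
    # Stand-in control for "similarly qualified" (e.g. prior ability)
    "ability": rng.normal(0.0, 1.0, size=n),
})

# Simulated grades on a 4.0 scale with a deliberately tiny attractiveness
# effect; the exact coefficients are arbitrary.
df["grade"] = (3.0
               + 0.40 * df["ability"]
               + 0.017 * df["attractiveness"]
               + rng.normal(0.0, 0.5, size=n)).clip(0.0, 4.0)

# Specification 1: attractiveness as a single continuous 1-10 variable
m1 = smf.ols("grade ~ attractiveness + ability", data=df).fit()

# Specification 2: attractiveness binned into three categories, with the
# "average" bin as the reference group, so the "least" and "most" bins
# each get their own coefficient
df["attr_cat"] = pd.cut(df["attractiveness"], bins=[0, 3, 7, 10],
                        labels=["least", "average", "most"])
m2 = smf.ols("grade ~ C(attr_cat, Treatment('average')) + ability",
             data=df).fit()

print(m1.summary().tables[1])  # coefficient on the continuous rating
print(m2.summary().tables[1])  # coefficients on the 'least' and 'most' bins
```

On real data, whether the continuous coefficient or the binned “least attractive” coefficient clears the significance threshold is exactly the kind of analyst degree of freedom the comment above is worried about.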
