The Atlantic has a new story about attrition in PhD programs. In our discussions here, we focus a lot on how specific issues in philosophy – sexual harassment, gender and racial bias, lack of accessibility, etc. – can be especially hard on particular groups of students. But, the importance of those issues notwithstanding, grad school can be hard for anyone and everyone. And there’s an increasing amount of evidence that grad students are especially vulnerable to mental health problems. The Atlantic article focuses on some potential explanations for general grad student unhappiness and attrition, especially the high levels of uncertainty about the future, lack of mentoring, and relentless criticism that grad students often encounter.
This conversation is a complicated one to have, especially in fields like philosophy. The Atlantic article seems to start from the premise that attrition is, in general, a bad thing – that if students have the ability to finish a PhD but aren’t finishing, then their universities are failing them. But I doubt it’s that simple. Grad school isn’t for everyone, and it’s probably a good thing that some people get into grad programs, realize the work isn’t making them happy, and leave (even if they have the academic ability to finish). It’s hard to predict what grad school will be like from the outside, and there’s nothing in principle wrong with giving it a try and realizing it’s not your thing. I worry we sometimes pathologize leaving as failure – as though there’s something intrinsically better about finishing a PhD if you can, even if you don’t want to – and that in doing so we make it harder for students who simply aren’t enjoying their work to move on to something else. That said, I don’t doubt that we’re failing our grad students along multiple dimensions – including those discussed in The Atlantic – and that as a result many who could be very happy and successful in our PhD programs aren’t.
I imagine many of our readers were shocked by a much-hyped study which purported to show lack of racial bias in police shootings. You can read a very important critique of it here.
First, there is extensive evidence (including in the datasets Fryer considers) of large racial disparities in who gets stopped by police, even controlling for differences in crime rates (perhaps especially under policies like New York City’s “Stop-and-Frisk”). Because of this, the “hit rate”—the percent of times a stop ends with a confirmation of wrongdoing—is often higher for whites than blacks. Even if police pulled the trigger without “bias,” this disparity in stops would produce vastly unequal death rates.
This means that when we start the analysis by looking at encounters with police, we have already washed away some of the relevant racial bias. The unique data on police–citizen encounters Fryer relies on from Houston allows him in effect to “control” for the propensity to come into contact with the police in the first place. This is likely part of the reason he finds no evidence of bias in lethal interactions, while others have shown substantial racial disparities. For example, in a 2015 PLOS ONE article, Cody T. Ross estimates that black Americans’ probability of being shot by the police is 3 times the rate for whites—and the disparity goes up to more than 20 in some counties. Similar community-level disparities that are unexplained by differences in crime rates emerge from a recent report from the Center for Policing Equity.
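The statistical point in the excerpt above — that conditioning on police encounters can wash out bias located in who gets stopped — can be illustrated with a minimal arithmetic sketch. All the numbers below are invented for illustration only; they are not drawn from Fryer’s data or Ross’s estimates.

```python
# Hypothetical illustration: even if police shoot at an IDENTICAL rate
# per stop ("no bias" conditional on an encounter), unequal stop rates
# produce unequal per-capita shooting rates.

pop = {"white": 100_000, "black": 100_000}   # equal-sized populations (assumed)
stop_rate = {"white": 0.02, "black": 0.10}   # blacks stopped 5x as often (assumed)
shoot_per_stop = 0.001                       # same rate per stop for both groups

per_capita = {}
for group, n in pop.items():
    stops = n * stop_rate[group]             # expected number of stops
    shootings = stops * shoot_per_stop       # expected number of shootings
    per_capita[group] = shootings / n        # shootings per person in the group
    print(group, per_capita[group])

# An analysis that conditions on stops sees no disparity (shoot_per_stop
# is identical), yet the per-capita shooting rate is 5x higher for the
# more heavily stopped group -- the disparity lives entirely in who
# gets stopped, which an encounter-conditioned analysis cannot detect.
```

Under these made-up numbers, the per-stop shooting rate is equal by construction, but the per-capita disparity exactly tracks the disparity in stop rates.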
Anti-black racism in America is much closer than some of us might realize to British colonialism in India. The very same sorts of arguments that were used to justify British colonialism in India can now be found within certain conservative narratives in America and are being used to justify anti-Black racism. We have suffered the dire consequences of a racist British occupation. We cannot allow our fellow Black citizens to suffer in kind from American racism.