A question from Philosophical Quarterly’s editor

Katherine Hawley has written to us with a good question.

I edit the Philosophical Quarterly, and we keep statistics on numbers of papers submitted, rejected and accepted each year, our turn-around time, etc.

It has crossed my mind more than once that it would be interesting to have statistics on the gender of submitting authors, and relative success rates. But then we’d have to ask authors to declare their gender to the editorial assistant. This wouldn’t affect the anonymous refereeing process, but I wonder whether it would unsettle authors. How would you feel about being asked your gender when you submitted a paper to a journal?

It’s a great question, and I’m really pleased to have the editor of such a prominent journal asking it. I’d also add that it would be good to have such data on race, and similar questions would arise.

So… what do you think?

13 thoughts on “A question from Philosophical Quarterly’s editor”

  1. I’d be fine with it, as long as the reason is explained, as well as the process that will ensure anonymity. I know some journals do collect stats like this already, but as far as I know they rely on guessing gender from first name, which we all know can be pretty unreliable. I think it’s a good idea, and that it would be good to also collect stats on race.

  2. From experience in submitting funding applications (in Australia), I know that the ‘statistics gathering’ is used as one of the primary methods of evaluating an application, on a par with the actual application itself, despite assurances that it is only for ‘statistical purposes’.

    I think a better idea might be this: since they already keep statistics on the number of papers submitted, they could contact the submitters at the end of the year and request this and other information for statistical purposes.

    The number of people who then voluntarily respond with information on gender and so on might also say something about the contentious issue of gender imbalance.

    (I’m rather fond of your blog also…)

  3. Frances, that is a deeply disturbing thought. Has this been publicly admitted? In all the applications I’ve submitted, the data has been on a separate sheet, without a name attached, which (they say) will be detached and processed separately. Is the process different in Australia?

  4. I kinda have to agree with Frances, although I hope the process isn’t exactly as it is described.

    First, it’d be best to collect the data at the end of the year, or after the manuscript has been accepted or rejected. On the one hand, contacting the author only after the process is complete protects the journal against any claim of gender bias. On the other hand, it may even prevent some form of bias from creeping into the reviewing process – though it sounds like Katherine has already tried to eliminate that.

  5. For lots of us, informing the journal of our names is informing them of our gender. Perhaps, placed in this context, an explanation from the editors that it is important to be more accurate might mitigate the worries.

  6. Let me add: I share the worry about editor bias, so perhaps there should be a note to the effect that one’s name is seen initially only by an assistant. Then it could be added that though gender can be inferred from the name in a large number of cases, it is important to be more accurate…

  7. why not send out a questionnaire/request for email/something with the acceptance/rejection letter?

  8. lp, the journal may have a different experience, but I believe compliance rates among academics are very low. A VERY simple e-survey would get – at least at my university – a response rate well under 20%. Getting a statistically significant result usually requires begging, pleading and, if you can do it, threatening.

    We were recently all supposed to take some utterly silly tests to fulfill a state requirement, and the university, foreseeing the problem, threatened to withhold any raises we might have gotten if we didn’t do them. Even then the response was below 100%.

  9. mmm. yes of course. yes if i ask myself “honestly, if a journal sent you a rejection letter, and then asked you to help them by filling out a questionnaire…?” and yeah. i don’t even think i would reply.

  10. I’m inclined to agree that 1) such stats would be helpful and 2) a post-review survey would be nearly useless due to the low response rate.

    Ok, not an interesting post, but I’m using it as an excuse to find out how I can get a little picture next to my screen name. I like jj’s especially, but of course I would not steal it. :-)

  11. Thank you, cstars, on behalf of the tern. I’m inclined to covet the deep pink of Jender’s.

    If you sign up with WordPress – it’s free – then adding a logo comes with your profile page.

    lp, and if the referee remarks are off the wall, some people might want to send them a similarly accurate report.

  12. yes, that seems guaranteed to happen. okay, so my next brilliant idea: third party contracted to collect data? and then specify that author needs to state gender on cover page/ copy of paper/ whatever that’s sent to third party. some department could set itself up as said third party–give the job to graduate students or something (no i’m not volunteering)–and could do this for more than one journal.

  13. no, this could be done electronically. submitters would simply need to register their submission on an independently-managed web site. then the web site generates an automatic email to the journal stating the title only of the submission that’s been registered. and only after receiving this email would the journal proceed with assessing the submission. surely there’s funding out there somewhere for such a system.
