It seems that they might.
Matti Eklund, one of the editors of Philosophical Review, emailed me asking for some more detailed number crunching on the Journal Surveys section with respect to Comment Quality Ratings. He wanted to test the hypothesis that philosophers who have papers rejected tend to rate the quality of comments lower. So I did that. The numbers seem to confirm this.
The chart below compares comment-quality ratings for some of the mainstream general journals. The column labeled “All” is the average across all respondents. The column labeled “Initial Reject” averages just the ratings from respondents who received an initial verdict of “Reject,” and the column labeled “Initial Accept” averages just the ratings from respondents who received an initial verdict of “Accept.” (Keep in mind that the sample sizes for that last column are small; these journals rarely give out an initial verdict of “Accept.”)
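For readers who want to replicate this kind of breakdown, here is a minimal sketch of the grouping involved. The file name and the columns journal, initial_verdict, and comment_rating are illustrative assumptions, not the survey's actual field names.

```python
# A sketch of the breakdown above, assuming the survey responses live in a
# CSV with hypothetical columns "journal", "initial_verdict", and
# "comment_rating" (1-5). These names are illustrative, not the survey's.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name

def verdict_average(df, verdict):
    """Mean comment rating per journal, restricted to one initial verdict."""
    subset = df[df["initial_verdict"] == verdict]
    return subset.groupby("journal")["comment_rating"].mean()

table = pd.DataFrame({
    "All": responses.groupby("journal")["comment_rating"].mean(),
    "Initial Reject": verdict_average(responses, "Reject"),
    "Initial Accept": verdict_average(responses, "Accept"),
})

# Report sample sizes alongside the means: the "Initial Accept" column in
# particular rests on very few responses.
table["n Accept"] = (
    responses[responses["initial_verdict"] == "Accept"]
    .groupby("journal").size()
)
print(table.round(2))
```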
What’s interesting is that, while rejected philosophers do tend to rate comment quality lower, there is still marked variation among these journals even when you look only at rejected philosophers. Also interesting is that the Australasian Journal of Philosophy and Philosophical Quarterly are both above 3 among rejected philosophers.
Also interesting is the major shift for Philosophical Review. They seemed to be at the bottom in terms of comment quality, but their average among rejected philosophers tells a different story. Given PR’s very low acceptance rate, nearly all of its respondents are rejected authors, so judging PR by the average among ALL respondents, against journals whose averages include more accepted authors, would be misleading.
Very interesting, Andy. How about this: why not simply display the average comment ratings for the rejected papers for each journal, rather than display the average among all respondents? The typical user who is interested in the comments rating will probably be most interested in getting some idea of how good the comments will be in the event his or her paper is rejected, no? Surely it’s relatively less important to get good comments on an accepted paper.
Dustin,
I think that’s a great idea, and I think I’m going to do that.
I’ve been useful!
If the choice is between displaying average comment ratings for rejected papers, on the one hand, and displaying the average among all respondents, on the other, then I think the former is preferable, as Dustin says.
However, I think it’s also important to keep track of what authors of accepted papers, or of papers that received a verdict of revise & resubmit, have to say. Certainly the quality of comments accompanying a verdict of revise & resubmit is an important matter. And sometimes a journal’s referees and editors do important work providing comments that improve an accepted paper.
Why not just provide a few different averages? That way more relevant information is available.
The fact that those who suffer rejection rate referee comments lower may indicate spite or envy or some such sentiment. It could, however, also be an accurate assessment. I’ve been worried for some time that editors send papers they want or expect to be accepted to more competent referees.
Matti,
That would be very easy to do. I’ll go that route.
Hey Andy,
Have you controlled for those respondents who submit a response to the “quality of referee comments” question but admit that they didn’t receive any comments?
Here’s why I ask. The survey uses a 5-point Likert scale asking respondents about the quality of comments they received: 1 = very poor / very unhelpful and 5 = very good / very helpful. Although the survey asks respondents not to answer this question if they did not receive comments, I believe some might still respond. To the respondent’s mind, receiving no comments scores a 1 on the Likert scale, because receiving no comments is “very unhelpful.” PR, Mind, JPhil, PPR, and Noûs seem especially likely to reject without providing the author any substantive comments because of the volume of mss they receive.
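If the raw responses record whether any comments were received at all, this worry is straightforward to check by recomputing the averages with those respondents excluded. A sketch, assuming a hypothetical boolean column received_comments (not a field I know the survey to have):

```python
# Check whether "no comments received" respondents drag averages down,
# assuming a hypothetical boolean column "received_comments".
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name

rated = responses.dropna(subset=["comment_rating"])
all_mean = rated.groupby("journal")["comment_rating"].mean()
commented_mean = (
    rated[rated["received_comments"]]  # keep only genuine reports
    .groupby("journal")["comment_rating"].mean()
)

# A large positive difference means a journal's low average was driven
# partly by 1s from authors who received no comments at all.
print((commented_mean - all_mean).round(2).sort_values(ascending=False))
```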
Relatedly, there may be respondents who believe that an email from the editor indicating that their manuscript has been rejected counts as a comment. So respondents may believe they have received comments and respond very negatively (perhaps) to the quality-of-comments question. (There’s no way for you to control for this other than rephrasing the question or explaining what you mean by “comments”; instead of “comments,” might I suggest using “referee reports” or “referee reviews”?)
Finally, although I don’t believe Mark is wrong that some authors feel “spiteful” after having a manuscript rejected, I suspect there might be a strong correlation between a negative view of the quality of comments and the length of time it took the author to receive an initial decision on the manuscript. JPhil, which is notorious for taking an unconscionably long time to review mss, seems to rank lowest among the prominent journals. (BTW, I’m not judging JPhil here; I’m just passing along what I’ve heard from others who have submitted there.) Perhaps you can test for this correlation too.
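That correlation is also testable if the survey captures time to decision. Because the rating is an ordinal 1–5 item and review times are typically skewed, a rank correlation is the safer tool; here is a sketch using SciPy, with months_to_decision as an assumed, illustrative column name:

```python
# Rank correlation between time-to-decision and comment-quality rating,
# assuming a hypothetical column "months_to_decision".
import pandas as pd
from scipy.stats import spearmanr

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name
clean = responses.dropna(subset=["comment_rating", "months_to_decision"])

# Spearman is preferable to Pearson here: ratings are ordinal and waiting
# times are right-skewed, so only the rank ordering should matter.
rho, p = spearmanr(clean["months_to_decision"], clean["comment_rating"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# A clearly negative rho would support the hypothesis that longer waits
# go with harsher ratings of the comments received.
```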
These are just some thoughts on the matter.
Best,
Joe