I was surprised by these “statistics” about dialectica. As you can see from our actual statistics, our normal response time is 2 months, not 6.6 as stated by your poll. The same goes for the other figures.
Conducting and publishing polls on so small a sample (19 people) is not good statistics, and is actually misleading.
http://www.philosophie.ch/dialectica/dialectica_statistics.pdf
Hi Pascal,
I am well aware of the unreliability of a poll like this. It’s also my assumption that philosophers (the people who would look carefully at these things) are likely to know not to put much credence in a survey when the sample size is so low. That’s why I make it so explicitly clear on the front page of the results summary what the sample size is.
Given that I have some confidence that philosophers are competent to judge which results from this survey to take seriously, it seemed all things considered best to go forward with a system like this that gives some data on the activity of journals. Some data seems better than none at all, and the hope is that as more and more people fill out surveys we will have a more accurate reflection of (at minimum) useful comparison data. Note that the person who compiled the statistics on your sheet would seem to agree: you give statistics about the gender of submitters based on their first names, which is, as you acknowledge, highly unreliable. Presumably the justification is that, despite the inherent flaws of that method, it’s better to have some data on this than none at all. Right?
Can you point me in the direction of your stats concerning review times? I didn’t see that on the stats sheet you linked to.
Hi Pascal,
A fundamental problem is self-selection. Perhaps one is more eager to fill out the survey if one has had a paper accepted (that would account for the staggering 33 %). Also, perhaps papers that get accepted have a longer average review time (I don’t know how these things correlate). My worry is that self-selection bias doesn’t disappear as the number of submitted questionnaires increases. It would be great if journals would publish these data more readily.
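The point that more responses don’t cure self-selection can be made concrete with a small simulation. All the numbers below are invented purely for illustration (they are not Dialectica’s figures): suppose the journal’s true acceptance rate is 10 %, but authors of accepted papers are three times as likely to answer the survey as authors of rejected ones.

```python
import random

random.seed(0)

# Hypothetical parameters -- invented for illustration only.
TRUE_ACCEPT = 0.10          # actual acceptance rate at the journal
RESPOND_IF_ACCEPTED = 0.60  # accepted authors answer the survey 60% of the time
RESPOND_IF_REJECTED = 0.20  # rejected authors answer only 20% of the time

def surveyed_acceptance_rate(n_submissions):
    """Acceptance rate as it would appear among survey respondents."""
    responses = []
    for _ in range(n_submissions):
        accepted = random.random() < TRUE_ACCEPT
        respond_p = RESPOND_IF_ACCEPTED if accepted else RESPOND_IF_REJECTED
        if random.random() < respond_p:
            responses.append(accepted)
    return sum(responses) / len(responses)

# The survey converges to 0.06 / (0.06 + 0.18) = 25%, not the true 10%:
# a larger sample only makes the wrong number more precise.
print(surveyed_acceptance_rate(1_000))
print(surveyed_acceptance_rate(100_000))
```

Under these assumptions the surveyed rate settles near 25 % however many questionnaires come in, which is exactly the worry: growing the sample shrinks the noise but leaves the bias untouched.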