I’m directing an independent study this summer on Experimental Philosophy.
We’re starting with the new reader by Knobe and Nichols. I’ll probably be posting about some of it soon.
So far we’re through the manifesto and the section on cross-cultural differences.
We spent most of our time talking about “Normativity and Epistemic Intuitions” (Weinberg, Nichols, and Stich).
The central claim in this paper is that
a sizeable group of epistemological projects – a group which includes much of what has been done in epistemology in the analytic tradition – would be seriously undermined if one or more of a cluster of empirical hypotheses turned out to be true…[and] there is now a substantial body of evidence suggesting that some of those empirical hypotheses are true
The gist of these hypotheses is that epistemic intuitions seem to be subject to cross-cultural variation. What intuitions you have about knowledge may well depend on what culture you were raised in.
I’m going to lay down the argument more precisely soon. When I do, I’ll post it here.
For now, I’ll briefly note some observations about the study that caught our attention.
-The primary comparison cultures were Western and East Asian. The number of Westerners surveyed was vastly larger than the number of East Asians surveyed. I’m not sure how much that matters, but it seemed odd to both of us at first. In a couple of surveys there were about 200 Western respondents and only about 20 East Asian respondents.
-The exact-test p-values were all over the place. Only about half were under .05, and one was as high as .79.
There were only 20 East Asian respondents? I’ve never noticed that, but it seems incredibly premature to conclude anything about ‘traditional analytic epistemology’ based on the responses of 20 college students in Hong Kong.
Yeah. Check it out. Here is the first page from the data tables. It only has the data from the first Truetemp survey (Westerners = 189; East Asians = 24)…I’ll snap pictures of the other data tables and post them in a second.
http://www.qipit.com/public/andycullison/epistemic_intuitions_data_table_2_1
Here’s the second page from the data tables
http://www.qipit.com/public/andycullison/epistemic_intuitions_data_second_page
The first table doesn’t show up so well. It’s from the second Truetemp case (217 Westerners; 20 East Asians).
With the exception of the second survey, there are never more than 24 East Asian respondents.
Another point of interest…in the second survey, there are only 8 Western respondents.
Hi Andrew,
There’s nothing untoward about either of those aspects of the statistics. We have asymmetric numbers of respondents because (and your first commentator was mistaken about this) we recruited out of philosophy classrooms at Rutgers, and there are a heckuva lot more white kids than Asian kids in those classrooms. (Though I don’t recall why we ended up with so few W subjects in the “Community Wide Truetemp” case.) The statistical test we used, Fisher’s exact test, is, as we understand it, the appropriate one for uneven cells like that.
As for the varying range of p-values, I don’t think there’s anything meaningful in that. The high p-values are, I believe, for comparisons for which we’re not claiming statistical significance, which is part of the argument.
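For readers who haven’t met it: Fisher’s exact test conditions on the table’s margins and computes, from the hypergeometric distribution, the exact probability of every split at least as extreme as the one observed, which is why it remains valid even when the two cells are very uneven in size. Here’s a minimal stdlib-only sketch; the function and the worked example are mine, not taken from the WNS tables:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    With all margins fixed, the count in the top-left cell follows a
    hypergeometric distribution; the p-value sums the probabilities of
    every table no more probable than the one observed.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):
        # Hypergeometric probability of x in the top-left cell.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Small tolerance guards against floating-point ties.
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Fisher's classic "lady tasting tea" table gives p = 34/70:
print(round(fisher_exact_p(3, 1, 1, 3), 3))  # 0.486
```

Because the calculation is exact rather than an asymptotic approximation (like chi-square), nothing about it breaks when one group has 200 subjects and the other 20.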
Cool. I’m writing a review of the book this summer for Phil Psych. I plan on reading the parts of it I haven’t already read. So I’d be up for discussion about it here.
A few quick notes:
(1) One thing that worries me about the WNS paper is that they measured subjects’ responses only with a dichotomous choice between “Really knows” and “Only believes.” This sort of forced response has come under fire from some, though others think it’s perfectly legitimate; I suspect it depends on the subject matter. Sometimes it’s just not best to force people into one of two categories. I’m not sure this could be worked up into a full-blown objection, but it’s a worry.
(2) The image links you posted aren’t working for me.
(3) I recently read the Woolfolk, Doris, and Darley paper on moral responsibility. I had some serious worries about how they were interpreting their data. Maybe we can talk about that paper here if you get to it in your independent study.
Jonathan,
Thanks for the comment…I didn’t realize that the statistical insignificance was part of the argument.
However, I do seem to remember the results of one survey with a low p-value being compared to the results of another survey with a high p-value to support claims about factors that influence East Asian intuitions. (I don’t have the book with me right now, or I’d go check that). If that were being done, we’d want statistically significant results…but I have to go back and see if that was being done.
Josh, I share your concern regarding (1). I’ll look into (2) – thanks for the heads-up on that. Regarding (3), I’d love to discuss that paper; we just went over that section in the independent study. I’ve got a few thoughts on it that I’ll post here.
I think the bit that you’re remembering is that we found a difference between the Truetemp variants for the EA subjects, and didn’t find one for the W subjects. Precisely what seemed interesting was that, while different questions were generating statistically significant differences in the patterns of responses from the EAs, they weren’t doing so for the Ws. (And there wasn’t even a trend towards significance on the part of the Ws, iirc.) So that’s the contrast we’re appealing to: a change in surveys that makes a difference for the EAs doesn’t do so for the Ws. So the lack of a difference on the W side is essential to the argument.
Btw, I meant to say in response to your first commentator: it’s the job of the statistics, not our armchair-eyeballing capacities, to determine whether a sample size was too small, given the patterns of answers we observed. It’s counterintuitive(!), but nonetheless: often 20 subjects in one cell is more than enough to detect differences that are sufficiently unlikely to have been just random noise.
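To put a number on that last point: with hypothetical cells of roughly the shapes reported above (these counts are invented for illustration, NOT the actual WNS data), a lopsided split in a 20-subject group already lands far beyond anything random noise would produce:

```python
from math import comb

# Hypothetical cells (NOT the actual WNS data): 217 subjects split
# 170/47 in one group vs. 20 subjects split 5/15 in the other.
a, b, c, d = 170, 47, 5, 15
row1, row2, col1 = a + b, c + d, a + c
n = row1 + row2

def prob(x):
    # Hypergeometric probability of x in the top-left cell,
    # with the table's margins held fixed.
    return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

p_obs = prob(a)
# Sum the probabilities of all tables no more probable than the observed one.
p = sum(prob(x) for x in range(max(0, col1 - row2), min(row1, col1) + 1)
        if prob(x) <= p_obs * (1 + 1e-9))
print(f"two-sided Fisher's exact p = {p:.2g}")  # well below .05
```

So the eyeball worry about "only 20 subjects" cuts the other way: if 20 subjects produce a significant exact-test result, the small cell was, by definition, large enough to detect that difference.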