Barner explains how children recognize "subtle intentional cues" yet simultaneously exhibit "striking failures in computing simple inferences" for scalar implicature; the example given is the meaning of 'all' versus 'some.' This doesn't seem especially surprising: computing quantifiers on a scale requires that a scale already be set up in the child's mind, and that scale takes time to build and refine, since these words and their ordering do not exist intrinsically in our minds. To create the "set of scalar alternatives" to substitute in when constructing alternative sentences, a child must be exposed to alternatives and recognize them as alternatives. My thought is that this might be a place where we could effectively "teach" children grammar, by explicitly giving them alternatives to add to the set, as in: "Give me some of the cake, but not all of it." Barner's article made me reflect on how applicable the set theory I've learned in CS classes is to linguistics. It is strange for me to think of language as a probability game, with sets and subsets, and yet this basic miscategorization comes down to children not yet having enough alternatives in the larger scalar set to understand that 'some' picks out a smaller subset than 'all'. This gives me hope for a future in which we could build an AI whose set theory matches how our minds generally learn, however our UG probabilities are universally modeled.
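The set-theoretic reading of 'some' versus 'all' can be made concrete with a small sketch. This is my own toy illustration, not code from Barner's article: the pragmatic (adult) reading of 'some' is the literal reading strengthened by negating the stronger scalar alternative 'all'.

```python
# Toy model of scalar implicature with plain Python sets (illustrative only).
# The scale is <some, all>; a pragmatic listener strengthens 'some' by
# excluding situations where the stronger alternative 'all' is true.

def literal_some(chosen, total):
    """Literal 'some': at least one element of total is chosen."""
    return len(chosen) > 0

def literal_all(chosen, total):
    """Literal 'all': every element of total is chosen."""
    return chosen == total

def pragmatic_some(chosen, total):
    """'Some' strengthened by the alternative 'all':
    true only when 'some' holds and 'all' does not."""
    return literal_some(chosen, total) and not literal_all(chosen, total)

cake = {"slice1", "slice2", "slice3"}
assert literal_some({"slice1"}, cake)    # literal 'some' holds
assert pragmatic_some({"slice1"}, cake)  # 'some but not all' holds
assert literal_some(cake, cake)          # 'some' is literally true of all
assert not pragmatic_some(cake, cake)    # but the pragmatic reading blocks it
```

A child who has not yet added 'all' to the set of alternatives would, on this picture, be stuck at `literal_some`, accepting 'some' even when all the cake is taken.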
The Stiller article surprised me mainly with this sentence: "Our data rule out the simplest version of both the Gricean counterfactual theory and the linguistic alternatives theory. Instead, they point the way towards an account...integrated probabilistically with world knowledge." Since I read the Barner study before the Stiller study, I had understood the linguistic alternatives theory to be based on probability already: the correct alternatives need to be in the set before a child can construct the 'right' alternatives that differentiate 'some' from 'all'. I don't think linguistic alternatives and an integrated probabilistic model are at odds, but perhaps I misunderstand the concept of linguistic alternatives, since Stiller says his findings are "consistent" with the Barner studies. In some ways the result feels intuitive, such as using rarer features to identify someone: it is far more informative to describe something by a rarer feature of it, and our memory likewise notes changes more than everyday occurrences. For example, we can often forget whether we brushed our teeth or put the dog out, because these events are so routine that our minds sometimes do not even 'note' them, in the same way that we may not 'note' that a friend has legs, because we assume most people have legs.
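The intuition that rarer features are more informative can be phrased probabilistically. This is my own sketch with made-up numbers, not data or code from the Stiller study: a feature's information content (surprisal) is -log2(p), so the rarer the feature, the more it narrows down the referent.

```python
import math

def surprisal(p):
    """Information content, in bits, of observing a feature with probability p."""
    return -math.log2(p)

# Assumed, made-up base rates for features of a person in a crowd.
features = {
    "has legs": 0.99,        # nearly everyone, so almost uninformative
    "wears glasses": 0.3,
    "wears a top hat": 0.01, # rare, so highly informative
}

# A speaker who wants to be informative picks the highest-surprisal feature.
best = max(features, key=lambda f: surprisal(features[f]))
print(best)  # -> wears a top hat
```

On this view, 'my friend has legs' is a poor description for the same reason that 'some' is a weak claim when 'all' is true: the hearer learns almost nothing from it.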
Overall, I enjoyed the two articles. It is strangely illuminating that such simple differences between children and adults hint at the models we use to define terms relative to each other and at how we construct sentences with scalar terms.
Hi Anna,
I really liked the way you related these articles to the things we take for granted, with your example of how we do not note our friends' body parts. I would further relate it to the fact that humans make many assumptions with language, thinking that everybody understands what they mean because we do not consider our own language rare. My grandfather, the most proper of men, would correct my enunciation constantly. I would tell him that the point was for him to understand me, not to correct me, and he would respond that language is only as good as the person who hears it, not the person who says it. I think this relates to the fact that, as many assumptions as we make, it is all up for interpretation, and we shouldn't take the meaning of our speech for granted.