When someone says “I ate some of the cake,” we automatically infer that they did not eat all of it, even though “some” can logically be used to mean “all.” The Barner and Stiller/Goodman/Frank readings attempt to explain this phenomenon of “scalar implicature”: how we use weak terms to imply the negation of stronger ones that lie along the same scale.
The two main theories attempting to explain implicatures are 1) Grice's theory of implicature, which states that a speaker will make a statement as informative as required but no more informative than required (so any deviation from these conditions suggests that the speaker actually means something different from what the statement seems to mean at first), and 2) a linguistic/grammatical alternative theory, which holds that we compute possible implicatures from the lexical or grammatical structure of the sentence itself.
This week in my SymSys class, we studied Bayesian reasoning, a potential model for explaining how people learn. It postulates that when trying to reach a conclusion about a situation, we compare the “prior probabilities” of competing mental hypotheses; when we encounter new data, we update the probabilities of those hypotheses accordingly. It seems like our understanding of scalar implicatures could be explained by a Bayesian theory of learning. The Stiller reading said that its findings supported a “statistical account of scalar pragmatics” in an ad-hoc task, where people don't have a specific scale to rely on. Instead, people draw conclusions about the implications of a sentence by examining how probable the things it describes are in the world. This fits a Bayesian model: determining the probability that something is the case, given certain information, based on mental hypotheses.
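To make the update step concrete, here's a minimal sketch of one Bayesian update in Python. The coin hypotheses and all the numbers are my own invented example, not anything from the readings:

```python
# Minimal sketch of a Bayesian update over discrete hypotheses.
# Hypotheses and probabilities are invented for illustration only.

def bayes_update(priors, likelihoods):
    """P(h | data) is proportional to P(data | h) * P(h), renormalized."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior beliefs about two hypotheses, before seeing any data.
priors = {"fair_coin": 0.8, "two_headed_coin": 0.2}

# New data: the coin comes up heads. P(heads | h) for each hypothesis.
likelihoods = {"fair_coin": 0.5, "two_headed_coin": 1.0}

print(bayes_update(priors, likelihoods))
# {'fair_coin': 0.666..., 'two_headed_coin': 0.333...}
```

Each new observation just feeds the previous posterior back in as the next prior, which is the “update at each new exposure” idea from the paragraph above.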
Even in the strictly scalar case, you could argue that people understand the subtext of scalar implicatures simply because they've been exposed to them so many times in the past and have updated their conclusions about the meanings of these sentences at each new exposure. Then, when they hear the statement “I ate some of the cake,” they can compare the probabilities P(ate all of the cake | said “I ate some of the cake”) vs. P(ate some but not all of the cake | said “I ate some of the cake”). This would explain why young children are less likely to understand implicatures: they haven't been exposed to as many examples.
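Here's a rough sketch of that comparison in Python. The numbers are made up for illustration; a real probabilistic model (like the one in the Stiller/Goodman/Frank reading) would derive the likelihoods from assumptions about speaker informativeness rather than stipulate them, but the Bayes' rule step is the same:

```python
# Hedged sketch: comparing the two posteriors from the paragraph above.
# All numbers are invented for illustration.

# Prior over world states, before hearing anything.
prior = {"ate_all": 0.5, "ate_some_not_all": 0.5}

# How likely each kind of eater is to say "I ate some of the cake":
# an informative (Gricean) speaker who ate everything would usually
# say "all", so P("some" | ate_all) is low.
p_says_some = {"ate_all": 0.1, "ate_some_not_all": 0.9}

# Bayes' rule: P(state | "some") is proportional to
# P("some" | state) * P(state), renormalized.
unnormalized = {s: prior[s] * p_says_some[s] for s in prior}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

print(posterior)
# {'ate_all': 0.1, 'ate_some_not_all': 0.9} -> "some" implies "not all"
```

The implicature falls out of the listener's belief that a speaker who ate all of the cake would probably have said so.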
I think further statistical analysis of people's understanding of implicatures given their backgrounds (exposure to the English language, class, etc.) could be interesting. I'd also be interested in exploring implicatures across languages.
I really enjoyed learning about Bayesian reasoning and how it applies to scalar implicatures. According to Bayesian inference, the probability of a hypothesis should change as more information becomes available. Similarly, I see how children adapt their interpretation and understanding of scalar implicatures as they gain more exposure to them. Bayesian reasoning became especially relevant for me during the election. While I, along with many news outlets, originally hypothesized that Hillary Clinton would win the presidency, my confidence in that hypothesis decreased as more and more swing states went to Donald Trump. I finally had to give it up when Trump surpassed 270 electoral votes, and now we must use this time to mobilize our efforts and create positive change.