Both readings this week deal with the ability (read: inability) of children to comprehend scalar implicatures, or the capacity to infer information about a claim from the fact that the speaker did not use stronger language. For example, if I said "I've read some of the books on this shelf," I have implied that I haven't read all of them, because if I had, I would have said so! This is a concept that is both highly obvious and one I had never really considered before, so I very much enjoyed both of these readings.
Adults are great at making these inferences, but young children are not. Both the Barner et al. and Stiller et al. readings ask why this is, and they arrive at separate but compatible conclusions. Barner concludes "that children's knowledge of scalar alternatives places a significant constraint on their ability to compute scalar implicatures" (Barner 93); that is, when hearing the word "some," children are not immediately able to bring other scalar words to mind (such as "all") and are therefore unable to connect the two concepts.
Meanwhile, Stiller ultimately arrives at the point that, like all of pragmatics, scalar implicatures "operate over our knowledge about the world, our knowledge of language, and our knowledge of other people" (Stiller 6). All humans are better at implicature when context is present, when they can bring their knowledge of the real world into the computation. Because adults have more knowledge of the real world than children do, they are better at such computations.
It seems likely to me that both Barner and Stiller are right here: children's difficulty with parsing scalar implicatures has to do with both their limited world knowledge and their limited scalar lexicon. What fascinates me most about this idea, however, is the difficulty it might present for digital computers as natural language processing becomes more advanced. A child will grow up, gain world experience, and develop their linguistic abilities. Sure, we can program a scalar lexicon into a computer, but how does a computer get knowledge of the world, knowledge that it can then use in casual conversation to perform pragmatic computations like scalar implicature? To me, that doesn't seem like something we can code, and it might therefore be a barrier to building AI whose speech is fully indistinguishable from that of humans.
I would be fascinated to see how the experiments Barner and Stiller conducted would turn out if run on a computer. How does the most advanced text-based AI in the world do on tests of scalar implicature?
I think the questions you posed about how digital computers can understand natural language are very interesting. You say at one point that you don't think social implicature can be coded. Do you only mean the social implicature that Stiller mentions? At least to me, the social implicature that Barner talks about is just a rule that follows logically from the semantic relationship between "some" and "all." I can envision an if statement that parses a potential sentence and then figures out whether or not social implicature could apply, something like the sketch below.
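Just to make that concrete, here's a toy sketch of what that if statement might look like. The scales, names, and wording are entirely my own invention for illustration, not anything from either reading.

```python
# Toy rule-based scalar implicature checker.
# Horn scales: each tuple runs from weakest to strongest term.
SCALES = [
    ("some", "most", "all"),
    ("warm", "hot"),
    ("good", "excellent"),
]

def scalar_implicature(sentence: str) -> list[str]:
    """Return inferences licensed by weak scalar terms in `sentence`."""
    words = sentence.lower().split()
    inferences = []
    for scale in SCALES:
        # Only non-maximal terms implicate anything: choosing "some"
        # over "most"/"all" suggests the stronger claims are false.
        for i, term in enumerate(scale[:-1]):
            if term in words:
                stronger = scale[i + 1:]
                inferences.append(
                    f"'{term}' used, so presumably not {', not '.join(stronger)}"
                )
    return inferences

print(scalar_implicature("I've read some of the books on this shelf"))
# -> ["'some' used, so presumably not most, not all"]
```

Of course, this falls apart as soon as context matters. It has no idea about negation or questions, so it fires the same way on "I didn't read some of the books," and deciding when the inference actually goes through is exactly the world-knowledge problem you describe.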
As for the issue of applying world knowledge to social implicatures, I think that is a more difficult task. The biggest difficulty would be applying the correct relevant knowledge to whatever sentence is being uttered at a particular moment. I think the IBM supercomputer Watson and its performance on Jeopardy suggest that this kind of social implicature may be doable. As you may know, the questions on Jeopardy are highly idiosyncratic: they rely on puns and wordplay quite often, and categories and questions often draw on knowledge that crosses different domains. Despite such difficulties, Watson was able to decipher the questions and beat out veteran Jeopardy champions. This leads me to believe that it would be possible to apply relevant world knowledge to social implicature.
Lastly, I just said "I need some help" to Siri, and she brought up a list of ways she could help me. When I said "I need all help," she told me she didn't understand what I was saying. This behavior from Siri is probably the result of probabilistic analysis over amassed speech data, and given a large enough corpus of language-use data, perhaps a lot of the implicit mechanisms of social implicature could be covered by some AI.