The Barner paper explores the phenomenon in which children fail to compute scalar implicatures and argues that this failure stems from a lack of knowledge of scalar alternatives. For example, when children are presented with the word “some” in “Are some of the animals sleeping?” they are unable to derive scalar alternatives such as “all.” Barner then proposes that scalar alternatives, unlike lexical concepts such as numerals, cannot be acquired through rote memorization alone; children must instead learn which items belong together. By grouping syntactically corresponding and semantically related lexical items within the same scale relative to one another, children can learn and access these scales to draw scalar implicatures.
The Stiller paper expands on this topic, first by agreeing with Barner’s conclusions: Stiller’s experiments support the view that children possess the inferential mechanisms for implicature yet struggle to derive the alternatives in a scale. Stiller also extends the implications of this work to pragmatic computation beyond a linguistic context, referencing the reliance of both adults and children on so-called “shared knowledge” to make inferences. This knowledge provides statistical information about which interpretations of an utterance are more likely to be correct.
This week’s readings tie in nicely with our current study in SYMSYS 1 of learning, deductive vs. inductive reasoning, and Bayesian inference. Children are able to learn scalar alternatives when the items are juxtaposed within the same context, which parallels how inductive reasoning supports concept learning in children --- pointing out examples of what is a “dog” and what is a “cat,” without enumerating the infinitely many things that are not “dogs,” is sufficient for teaching a child what a “dog” is. Stiller’s paper references Bayesian inference, in which one starts from prior beliefs over hypotheses and updates them with each new piece of evidence, strengthening hypotheses that agree with the observed outcome and weakening those that do not. These are remarkable tools for learning, and language acquisition and concept learning are extremely enlightening instances for studying how they fit into human learning and how they work.
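The Bayesian updating described above can be made concrete with a small sketch. This is a hypothetical illustration, not a model from either paper: a listener weighs two made-up hypotheses about what a speaker means by “some” and updates after one observation, using Bayes’ rule (posterior ∝ prior × likelihood).

```python
# A minimal sketch of Bayesian belief updating (hypothetical example,
# not taken from Barner or Stiller): the hypothesis names and the
# likelihood values below are illustrative assumptions.

def bayes_update(priors, likelihoods):
    """Return the posterior P(h | evidence) for each hypothesis h."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about the speaker's intended meaning of "some":
# "some but not all" vs. "some, and possibly all".
priors = {"some_not_all": 0.5, "some_possibly_all": 0.5}

# Observation: the speaker said "some" in a situation where "all" was
# also true. Assumed likelihoods of that choice under each hypothesis.
likelihoods = {"some_not_all": 0.1, "some_possibly_all": 0.9}

posterior = bayes_update(priors, likelihoods)
# The evidence shifts belief toward "some, and possibly all".
```

Each new observation would simply feed the posterior back in as the next prior, which is the incremental updating the paragraph above describes.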
Love the connection to Bayesian inference! It is interesting that learning words that link to concrete objects, like "dog" and "cat," seems to come more easily than learning words like "some" that link to more abstract concepts. I wonder what accounts for these differences in the speed of Bayesian learning throughout childhood development?