This week’s readings examined the idea of scalar implicature
and why children have difficulty accessing scalar alternatives whereas adults do not.
In “Accessing the Unsaid,” Barner explores why children fail to derive
scalar implicatures in conversation, postulating that their ability to
process the difference between words like “some” and “only some” depends on
whether the alternatives in question are context-dependent. In the study, Barner asked
children to evaluate statements using context-independent scalar words like “some” and
context-dependent descriptions like “the dog and the cat,” and found that adding
“only” significantly changed responses for the context-dependent descriptions but
had no significant effect on “some.” By focusing on children, the study shows
that accessing scalar alternatives requires additional learning.
Similarly,
Stiller’s paper compares the counterfactual and linguistic-alternatives
hypotheses for how people derive scalar implicatures. Through three
experiments involving a context-dependent scale, no scale, and a scale with items of
differing rarity, Stiller shows that linguistic and social factors must be
integrated with world knowledge (like knowing that rarer features are more
informative) to make these pragmatic inferences possible.
It’s interesting
that children respond so well to context-specific uses of “only” (“only the cat
and the dog”) but not to more general terms (“only some”) when the
interpretations are so similar. The fact that adults judge these statements
the same way while children do not suggests that, at some point, children learn to
generalize from specific cases like “the dog and the cat” to scalar terms like “some.” This
parallels Stiller’s finding that people infer that rarer features are more
informative, a conclusion based on shared statistical knowledge. How much
exposure to contextual examples is required for a generalized pattern to be
learned? At what point do people learn that rarity indicates
informativeness? At what point does an understanding of “some” and its scalar
alternatives emerge?
The idea that
statistical world knowledge accumulates until a certain age to allow people to
make certain inferences is a common theme in learning, not just in linguistics.
Even the seemingly simple task of labeling items (a furry four-legged mammal as
“cat”) after exposure to only a tiny subset of the forms that label can take
is one that humans do particularly well. In
thinking about machine learning, the cumulative learning tasks that humans
struggle with, like differentiating between context-specific scales and lexical
scales, might be more informative than the ones we do easily. Perhaps understanding
how humans learn to generalize context-dependent information into lexical
information can provide insight into learning from repeated exposure and the
amount of exposure needed for learning to emerge, whether in humans or
machines.
I am also intrigued by “how much exposure” we need to learn a pattern. Outside of the linguistic world, children still learn patterns through interaction with their parents. I think it takes only one or two mistakes for children to understand the difference. If a mother asks her child to bring her some flower pots and the child brings all of the flower pots, she will tell him that “some” is different from “all.” From just one instance, the child will understand what “some” means. However, I am interested to know more about how the degree of “some” gets refined, as in bringing 2-3 pots versus 50-60 pots.