It's becoming clearer and clearer to me why linguistics is a core component of Symbolic Systems here at Stanford: studies that focus on implicature and Grice's maxims make the connections between linguistics, philosophy, and psychology much easier to see.
The study conducted by Barner, Brooks, and Bale tackled an issue at the intersection of implicature and development: young children compute some implicatures reliably while others seem to make no sense to them, and it is difficult to determine why. By testing different contexts with the key qualifying word "only," Barner et al. deduced that knowledge of the alternative scenarios is pivotal to children's ability to compute implicatures correctly. Whether children's difficulty stems from limited life experience, smaller working memory capacity, or something else altogether should be the subject of further study.
Stiller, Goodman, and Frank also looked at scalar implicature, focusing on a specific factor that may help us better understand the results of the previous study. Across three experiments, Stiller et al. concluded that "children succeeded at the 'scales' condition by relying...on the real-world knowledge that possessing a feature (e.g. a top hat) is less common than not possessing that feature" (6).
This being said, how does this translate into computer inference? A computer doesn't have the luxury of cognitive development and the gradual acquisition of world knowledge that children do; a computer's scalar inferential ability on day one will be the same after a year, or two years, or ten. The results of the studies present deeper issues still, as the importance of context and alternatives for proper scalar inference seems difficult to program. Is there a more practical way to build computers that distinguish between "all" and "some" than hard-coding every case? Are we, as humans, hard-coded ourselves, and is that why it takes us until early adulthood to recognize the oddness of underinformative sentences such as "some birthday cards have text"?
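For what it's worth, this question may not be entirely open. Goodman and Frank, two of the authors above, have worked on probabilistic models of pragmatics (the Rational Speech Acts framework) in which implicatures fall out of recursive reasoning about a cooperative speaker rather than being hard-coded. Below is a minimal sketch in that spirit; the particular states, utterances, uniform prior, and rationality parameter are my own illustrative assumptions, not values from either paper.

```python
# A minimal Rational Speech Acts (RSA) style model of the "some"/"all"
# scalar implicature. All specifics here (states, utterances, uniform
# prior, alpha) are illustrative assumptions for this sketch.
import numpy as np

states = [0, 1, 2, 3]                 # how many of 3 objects have the feature
utterances = ["none", "some", "all"]

# Literal truth conditions: 1 where the utterance is literally true of a state.
# Note that "some" is literally true even when all 3 objects have the feature.
semantics = {
    "none": np.array([1.0, 0.0, 0.0, 0.0]),
    "some": np.array([0.0, 1.0, 1.0, 1.0]),
    "all":  np.array([0.0, 0.0, 0.0, 1.0]),
}

prior = np.full(4, 0.25)  # world knowledge about the feature plugs in here
alpha = 4.0               # how rationally the speaker is assumed to choose

def normalize(v):
    return v / v.sum()

def literal_listener(u):
    # P(state | utterance) is proportional to truth(u, state) * prior(state)
    return normalize(semantics[u] * prior)

def speaker(s):
    # P(utterance | state) is proportional to L0(state | utterance) ** alpha
    # (a softmax over alpha * log L0, with no utterance costs)
    return normalize(np.array([literal_listener(u)[s] for u in utterances]) ** alpha)

def pragmatic_listener(u):
    # P(state | utterance) is proportional to speaker(u | state) * prior(state)
    i = utterances.index(u)
    return normalize(np.array([speaker(s)[i] for s in states]) * prior)

print(literal_listener("some").round(3))    # [0. 0.333 0.333 0.333]
print(pragmatic_listener("some").round(3))  # roughly [0. 0.497 0.497 0.006]
# The pragmatic listener shifts probability away from the all-state:
# "some but not all" emerges from reasoning, not from a hard-coded rule.
```

Notice how the two studies map onto the model's two knobs: the set of utterances the speaker could have used corresponds to the alternatives that Barner et al. found children need access to, and the prior over states is where the real-world knowledge from Stiller et al. (top hats are rare) would enter. Nothing about "some but not all" is hard-coded; it emerges from the listener reasoning about why the speaker didn't say "all."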
Hi! I happened to bring up a very similar issue. These readings really illuminated a major problem that no one is discussing for machines: if we want them to be truly “human”-like, how are they going to develop cognitive skills beyond a child's? How do we make sure that computers can understand not only the difference between “all” and “some,” but also the many other things conveyed through tone, context, etc.? It seems the future of machines will rely heavily on linguistics, and tiny distinctions such as “all” vs. “some” will become crucial.