In investigating the origins of scalar implicatures, Stiller (2011) conducted three studies whose findings support a statistical, linguistic account of scalar pragmatics. Referring to their third experiment, which manipulated the informativeness of features by showing figures beforehand and demonstrated that the rarity of a feature correlates with its informativeness, they argue that "a theory of implicature must integrate fine-grained statistical information about such shared context." This stands in contrast to the counterfactual theory and the simple linguistic-alternatives theory.
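One way to see why rarity and informativeness should be linked (a standard information-theoretic gloss, not Stiller's own formalism) is through surprisal: the information carried by observing a feature is -log p(feature), so rarer features carry more bits. A minimal sketch, with feature frequencies invented for illustration:

```python
import math

# Hypothetical feature frequencies among displayed figures
# (illustrative numbers only, not Stiller's actual stimuli).
feature_freq = {"has_face": 0.9, "wears_glasses": 0.2, "wears_top_hat": 0.05}

for feature, p in feature_freq.items():
    surprisal = -math.log2(p)  # information (in bits) from observing the feature
    print(f"{feature}: p={p:.2f}, surprisal={surprisal:.2f} bits")

# Rarer features yield higher surprisal, so mentioning them is more
# informative -- consistent with the rarity/informativeness correlation.
```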
Stiller's conclusion reminds me of a theory of learning we've discussed in SymSys 1: that learning and decision making are fundamentally probabilistic, that we're always forming new judgments and decisions based on past evidence. It's a convincing argument to me that statistical analysis plays a large, unconscious role in almost everything we do, including the generation of scalar alternatives. And I wonder how many more fields of behavioral science are beginning to incorporate probabilistic accounts.
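One way to make "judgments based on past evidence" concrete is Bayes' rule, where a prior belief is updated by the likelihood of new evidence. A minimal sketch, with the probabilities invented for illustration:

```python
# Bayesian updating of belief in a hypothesis H after evidence E.
# All numbers are invented for illustration.
prior_h = 0.5                 # P(H) before seeing evidence
p_e_given_h = 0.8             # P(E | H)
p_e_given_not_h = 0.3         # P(E | not H)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e   # P(H | E) by Bayes' rule
print(f"P(H | E) = {posterior_h:.3f}")      # belief rises from 0.50 to ~0.727
```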
Totally agreed with the interest in probabilistic theories of learning. I think it's no surprise that much of the AI being developed these days also relies heavily on probability, namely machine learning. Anyone who's taken CS109 at Stanford remembers that the class ends with a machine learning assignment: you write an algorithm that assigns probabilistic labels to new data based on examples it has already seen. As AI develops further, it seems it will be helpful to draw on probabilistic theories of mind like the one you described.
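For anyone who hasn't taken CS109: the kind of classifier described above can be sketched as a naive Bayes model, which estimates P(label | words) from labeled examples it has already seen. A toy version (the training data and labels are invented, and this is not the actual assignment code):

```python
from collections import Counter, defaultdict
import math

# Toy naive Bayes classifier: learns P(label) and P(word | label) from
# examples it has already seen, then scores new inputs probabilistically.
train = [
    ("free prize money now", "spam"),
    ("win money free offer", "spam"),
    ("meeting notes for tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

label_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
for text, label in train:
    word_counts[label].update(text.split())

def classify(text):
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label, n in label_counts.items():
        total = sum(word_counts[label].values())
        # Log prior plus log likelihoods, with add-one (Laplace) smoothing.
        score = math.log(n / len(train))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("free money tomorrow"))  # -> "spam" on this toy data
```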