Sunday, November 13, 2016

A Pragmatic Possibility for Machine Learning?


Could a machine ever be hard-coded with the linguistic and pragmatic abilities needed to compute the implicit meaning behind literal sentences?
This question interests me as much as it interests cognitive neuroscientists and those at the cutting edge of artificial intelligence. If such a feat were possible, particularly with regard to scalar implicature, it would suggest that the abilities to compute these implicatures are immutable and innate in humans, and thus could be programmed into an AI from its very creation. While reading both research papers this week, I therefore wondered whether our implicative abilities are truly innate, or whether they are learned and developed through interactions within society that shape our pragmatic and heuristic processes.
Both Barner’s and Stiller’s research shed light on this question and on the developmental origins of the inferential mechanisms involved in implicature by analyzing when and why young children tend to make mistakes in scalar inference. Barner echoes Papafragou and Tantalou’s hypothesis “that children lack knowledge of which lexical items belong on common scales”, and hence struggle to understand which “words are relevant alternatives in a given context” (Barner, p. 3). To test this theory, Barner ran a scalar implicature task with sixty 4-year-olds and found that while children can easily make scalar implicatures with specific numerical values, they struggle with inferences based on lexical quantifiers. Barner theorizes that this may stem from children’s “failure to represent lexical items as members of psychological scales”, in contrast to their knowledge of a count list, memorized early in childhood. In simpler terms, it is much easier for a child to understand that the number “1” is “not 2”, because they have memorized a specific count list, than to understand that the quantifier “some” is “not all” without prior knowledge of the scalar alternatives of the word “some”.
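To make this mechanism concrete for myself, here is a toy Python sketch of scale-based strengthening. It is my own illustration, not code from Barner’s study: the scale inventory and the `strengthen` helper are assumptions I made up to show why a listener who lacks the scale ⟨some, all⟩ cannot compute “some but not all”.

```python
# Toy illustration (my own assumption, not Barner's experimental code):
# scalar implicature as negating the stronger alternatives on a known scale.
SCALES = {"some": ["all"], "warm": ["hot"], "or": ["and"]}

def strengthen(word, known_scales):
    # Look up the stronger alternatives the listener actually knows.
    stronger = known_scales.get(word, [])
    if not stronger:
        # No known scale-mates: only the literal meaning is available,
        # which is Barner's picture of the child listener.
        return word
    # Negate each stronger alternative to get the strengthened reading.
    return word + " but not " + " or ".join(stronger)

print(strengthen("some", SCALES))  # adult with the scale: "some but not all"
print(strengthen("some", {}))      # child without the scale: just "some"
```

The point of the sketch is that the inference itself is trivial once the scale is known; what the children in Barner’s task seem to lack is the lexical knowledge in `SCALES`, not the inferential step.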
Stiller’s research backs Barner’s theory, yet also delves into the roots of these “scales” in pragmatic inference. Using a stimulus-response ad-hoc scalar implicature task, Stiller concludes that heuristic capabilities, paired with real-world knowledge of the rarity of certain scale members (e.g., the idea that a top hat is not very common in the real world), are involved in pragmatic inferences such as scalar implicatures. This supports the idea that “Pragmatic computations operate over our knowledge about the world, our knowledge of language, and our knowledge of other people” (Stiller, p. 6), which helps to answer my key question and is highly relevant to the field of machine learning. The idea that we must use information generated from external contexts – about the world, about language, and about other people – to compute certain ad-hoc scalar implicatures suggests that scalar implicature cannot be fully hard-wired or hard-coded into an AI. Instead of giving machines a limited, innate breadth of knowledge from the very creation of their central processing systems, this research suggests we must create an effective machine simulation of “the child brain” that can learn from its environment in order to develop scalar implicature capabilities. This logic reinforces Alan Turing’s iconic question: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?”
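To see what “pragmatic computations operate over our knowledge” might look like as a program, here is a minimal sketch in the style of a Rational-Speech-Acts listener for a Stiller-style ad-hoc implicature. The scenario (two faces, one with glasses and a hat, one with glasses only), the uniform prior, and all the names are my assumptions for illustration, not the papers’ actual models or data.

```python
# Minimal Rational-Speech-Acts-style sketch of ad-hoc implicature
# (my own toy assumption, not Stiller's actual model or stimuli).
referents = ["glasses_and_hat", "glasses_only"]
utterances = ["glasses", "hat"]

# Literal truth conditions: which utterance is true of which referent.
meaning = {
    ("glasses", "glasses_and_hat"): 1, ("glasses", "glasses_only"): 1,
    ("hat", "glasses_and_hat"): 1,     ("hat", "glasses_only"): 0,
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()} if total else d

def literal_listener(u):
    # L0: condition a uniform prior over referents on literal truth.
    return normalize({r: meaning[(u, r)] for r in referents})

def speaker(r):
    # S1: prefer utterances under which L0 recovers the intended referent.
    return normalize({u: literal_listener(u)[r] for u in utterances})

def pragmatic_listener(u):
    # L1: reason about which referent would make the speaker choose u.
    return normalize({r: speaker(r)[u] for r in referents})

print(pragmatic_listener("glasses"))
```

Hearing the ambiguous “glasses”, the pragmatic listener favors the glasses-only face, because a speaker who meant the other face would more likely have said “hat”; the inference falls out of reasoning about the speaker rather than any hard-coded rule, which is exactly the learned, context-driven character the post is pointing at.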
While this analysis has answered one of my questions and given me a more nuanced understanding of the capabilities of Symbolic Systems, it also raises many other questions that I hope to answer through further research in the realm of pragmatics. Which pragmatic capabilities involved in ad-hoc scalar implicature, if any, are innate? Why and how are these capabilities wired in our brains, and could they be encoded and implemented in machines? Will machines ever be capable of ad-hoc scalar implicature? As research on machine learning deepens, I am optimistic that I may soon find out.
