The first three chapters of Andrew Carnie’s Syntax: A Generative Introduction lay
the foundations for understanding language not just as a geocultural phenomenon,
but as Language: the psychological capacity humans have to produce and understand
particular languages. Carnie explains the importance of Generative Grammar as a
cognitive theory, and then describes the different ways we can view
grammar as part of that capacity.
What I found most interesting in this reading, and therefore
want to focus on, is the idea of syntactic trees for representing
constituency. This is a really cool concept because a tree lets us pin down the
meaning of a sentence in terms of its syntactic structure. It relates directly
to Chapter 2, in which Carnie explains the parts of speech that make up a
sentence and defines them by where they can appear rather than by what they
mean. I find parts of speech fascinating precisely because of that positional
value. With morphological and syntactic distribution in mind, it is easy to
appreciate how extraordinarily complex putting together a well-formed sentence
really is.
Consider the example sentence, “The man killed the king with
the knife.” Even without drawing tree diagrams, it is possible to see how this
sentence could be read in at least two ways: the man uses the knife to kill the
king, or the king is holding the knife when he is killed. This room for
ambiguity is fascinating, and the ability to dispel it with a
syntactic tree is even more interesting. This leads to my thought: with so many
syntactic possibilities, do any AIs such as Siri or Google use syntactic trees
to better “comprehend” their inputs? If they do not, it would be very interesting to
whip up a low-level AI capable of rapidly determining all possible
trees for a given well-formed input sentence (a rough sketch of such a parser
follows below). Which interpretation would it choose as the best one? With this
in mind, I turn to us mere mortals. Given a
sentence such as “The man killed the king with the knife,” what interpretation
do we think of immediately? Does our current mental state affect the initial “tree”
that we form in our minds when we interpret a sentence? When you think of a
royal murder, do you immediately consider the murder weapon (the knife), or a
probable cause for the man to kill the king (the king was threatening him with a
knife)?
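As a rough sketch of that “all possible trees” idea, here is how a chart parser over a tiny hand-written grammar could enumerate both readings of the example sentence. The grammar below is my own toy assumption, not anything from Carnie and certainly not how Siri works; it just encodes enough structure for the prepositional phrase “with the knife” to attach either to the verb phrase (the instrument reading) or to the noun phrase “the king” (the possession reading).

# Toy demonstration (assumed grammar, using the NLTK library):
# enumerate every parse tree for the ambiguous sentence.
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Det N | NP PP
VP -> V NP | VP PP
PP -> P NP
Det -> 'the'
N  -> 'man' | 'king' | 'knife'
V  -> 'killed'
P  -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "the man killed the king with the knife".split()

# Prints two trees: one where the PP "with the knife" hangs off the VP
# (the man used the knife) and one where it hangs off the NP "the king"
# (the king had the knife).
for tree in parser.parse(sentence):
    print(tree)

Running this yields exactly two parses, which are the formal counterparts of the two readings described above.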
It is this element of the psyche that I wonder most about:
since there can be so many interpretations of a sentence, to what degree does
our own personality play into making that choice? As mentioned before, Siri and
Google must in some way make educated guesses about what sentences mean, and
those guesses are presumably driven by programmed rules or statistics rather
than by mood (a toy version of such a guess is sketched below). Humans are much
more malleable, so we have the freedom to choose, and to defend, a less logical
reading. From this, I wonder whether our comprehension of Language, and of the
innate pieces Carnie suggests we have, is rooted just as much in psychological
state as in syntactic structure.
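To make that “educated guess” concrete, here is one standard way a program can pick a single best tree rather than listing them all: attach probabilities to the grammar rules and keep the highest-probability parse. The numbers below are invented purely for illustration; a real system would estimate them from a corpus of parsed sentences.

# Toy demonstration (assumed probabilities, using the NLTK library):
# a probabilistic grammar lets the parser rank readings instead of listing them.
import nltk

pcfg = nltk.PCFG.fromstring("""
S  -> NP VP    [1.0]
NP -> Det N    [0.8]
NP -> NP PP    [0.2]
VP -> V NP     [0.7]
VP -> VP PP    [0.3]
PP -> P NP     [1.0]
Det -> 'the'   [1.0]
N  -> 'man'    [0.4]
N  -> 'king'   [0.4]
N  -> 'knife'  [0.2]
V  -> 'killed' [1.0]
P  -> 'with'   [1.0]
""")

parser = nltk.ViterbiParser(pcfg)
sentence = "the man killed the king with the knife".split()

# With these made-up numbers the instrument reading (PP attached to the VP)
# scores higher, so it is the single tree the parser returns.
for tree in parser.parse(sentence):
    print(tree)

The interesting thing is that the “choice” here depends entirely on those numbers, learned from past data, and not on anything resembling the parser’s current mental state.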
Interesting observations. It seems that this sort of “most likely scenario” reasoning comes into play in a few other ways in speech perception. If you hear “I am going to give the ?og a bath,” context would lead you to fill in “dog” instead of “log,” “bog,” etc. Similarly, new words are often learned by parsing a sentence in the most likely way and guessing at what the unknown word must mean. If you hear “the man cut his bagel with a [unfamiliar word],” you’d at least be able to learn that the word names something used to cut bagels, rather than that the bagel somehow possesses it.
It seems that the mind’s M.O. is considering possible scenarios and filling in details based on past results.