In Slobin (2004), the author carefully defines and discusses the differences between “verb-framed languages” (V-languages) and “satellite-framed languages” (S-languages). I found compelling Slobin’s point that rhetorical style, the way events are analyzed and described in discourse, is largely determined by accessibility, or ease of processing. I agree with this: a writer or speaker often aims to communicate unambiguously, and one way to make language unambiguous is to make it accessible and easy to process. That way, readers from various backgrounds and cultures, reading in different contexts (in a hurry, in a distracting environment, and so on), can be fairly certain to take away something close to the author’s original intent. When reading about the differences between V-languages and S-languages, I initially thought I would prefer V-languages for their conciseness. However, Slobin’s Chinese example of 飛出 (fei-chu) suddenly let me connect his argument with my past fifteen years of learning Chinese: I believe it is precisely the additional path satellite in Chinese phrases that makes the language so beautiful and so precise.
In the Atkins and Levin (1995) reading, I learned about an important problem for lexicographers: providing a systematic, theoretical infrastructure for dictionary making. Without one, we are left with only a subjective summary of the facts observed in the corpus data. The corpus data itself is also extremely important; just as in artificial intelligence, the more data the better. Why? It does not matter how many times a word has already been repeated: we should take every occurrence, every possible data point associated with that word, as an opportunity to update our “model,” because each new occurrence introduces new associations and new combinations, thereby enriching and expanding the language as a whole.
In the Haspelmath reading, we begin to look at language at an even more granular level - specifically, Haspelmath focuses largely on morphology. I personally believe that Haspelmath does a great job of roadmapping: I had a clear picture of what we were learning, where the reading was headed, and how it ties back to the rest of the study of language structure. I also found the parallelism between words and sentences, and between morphology and syntax, really interesting, especially the ability to represent morphologically complex (endocentric) words as hierarchical tree structures, just as we can with sentences.
You relate lexicography to artificial intelligence when you say they both require as much data as possible. How would you continue this connection? Do you think artificial intelligence and the study of language could work together, each gathering more data to further the other?