When you Google "define literally", you get quite mixed results. The official definition Google spits out is what our English teachers taught us: "in a literal manner or sense; exactly." The second is more puzzling: "[informal] used for emphasis or to express strong feeling while not being literally true." This second definition uses the first sense of literally to redefine the word as its own opposite. Strange.
Though this example is not a generative rule like the ones described in chapter one of the reading, it is, in my opinion, a good example of the prescriptive versus descriptive debate. The second definition arose from mass usage of the word literally to mean something other than its original sense.
In the first chapter of this book on syntax, Carnie discusses many of these dichotomies: the goal of modeling Language vs. the goal of describing Language, descriptive vs. prescriptive rules, learning language vs. acquiring Language. The part that I found most fascinating was Chomsky's levels of adequacy. Creating rules based entirely on corpora seemed to me an impossible and futile task. It seemed that we would always need to add more rules, sub-rules, and exceptions to the rules to successfully model Language. But the third level, explanatory adequacy, makes the problem seem less vast. I think I like this approach because it makes use of a more concrete study of how children acquire language.
In the third chapter, Carnie brings to our attention our first theory of sentence structure: the notion of constituents, groups of words that work together as one unit of language. This reminded me of context-free grammars, which I studied in previous computer science theory classes. When reading this chapter, I was struck by the idea that language existed before we even understood what parts of speech were. This in and of itself I take to be an argument for Universal Grammar. However, though parts of speech are powerful tools for modeling language, and the syntactic trees we build from them can generate the syntax of a sentence, we have yet to build fully successful parsing models.
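The connection to context-free grammars can be made concrete with a tiny sketch. Below is a toy CFG and a minimal recursive-descent parser that recovers constituent structure for one simple sentence; the grammar, category names, and example sentence are my own illustration (not from Carnie's text), and a real parser would need backtracking and ambiguity handling that this sketch omits.

```python
# A toy context-free grammar: each category maps to its possible
# expansions, mirroring the idea of constituents as units.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["D", "N"]],
    "VP": [["V", "NP"]],
    "D":  [["the"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"]],
}

def parse(symbol, words, start):
    """Try to derive a prefix of words[start:] from `symbol`.
    Returns (tree, next_index) on success, else None."""
    for production in GRAMMAR.get(symbol, []):
        children, i, ok = [], start, True
        for part in production:
            if part in GRAMMAR:                        # non-terminal: recurse
                result = parse(part, words, i)
                if result is None:
                    ok = False
                    break
                subtree, i = result
                children.append(subtree)
            elif i < len(words) and words[i] == part:  # terminal: match a word
                children.append(part)
                i += 1
            else:
                ok = False
                break
        if ok:
            return (symbol, children), i
    return None

tree, end = parse("S", "the dog chased the cat".split(), 0)
print(tree)
```

The printed tree groups "the dog" and "chased the cat" as NP and VP constituents, which is exactly the kind of hierarchical unit the chapter describes.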
(Posted on behalf of Hope Schroeder)
Hi Charissa,
Thanks for your post. I, too, was intrigued by the definitions of the levels of adequacy. Yes, the second and third levels of adequacy make the task of documenting language by observation less daunting, but doesn't studying infinitely many children and dealing with the messiness of native speaker "judgments" just add to the infinite complexity of this impossible question? These are issues that I feel Carnie left unanswered. What do you think?
Thanks,
Hope