In their paper "Building on a corpus: A linguistic and lexicographical look at some near-synonyms,"
Atkins and Levin explore usage of seven similar English verbs: quake, quiver,
shake, shiver, shudder, tremble, and vibrate.
In standard usage, five of these verbs are intransitive. However, analysis of large corpora reveals
that these nominally intransitive verbs have a few rare transitive uses, blurring the
line between transitive and intransitive verbs. Analyzing large numbers of sentences also
uncovers patterns in how words are used – for example, Atkins and Levin found some
shake synonyms take body part nouns as their subjects far more often than others,
a surprising result considering how similar their meanings are. Atkins and Levin's article also highlights
some of the drawbacks of corpora that we discussed in class. For example, unlike a native speaker, we
cannot ask a corpus if some new sentence structure or word usage is valid. No matter how large a corpus is, there are
still questions it cannot answer.
Chapters two and five of Haspelmath’s text contain detailed
explanations of the internal structure of words – their underlying lexemes, the
difference between inflection and derivation, the vast range of affixes that alter
their meanings, and much more. After
reading chapter two, I'd like to learn more about how these affixes develop. For example, Haspelmath discusses exceptional
affixes with concrete, non-abstract meanings, like bio-.
What led to this exceptional class of affixes? Could they be derived from words that were, at
some point in the past, standalone nouns?
It was also fascinating to learn how lexemes can be
systematically combined to form new words; I didn’t realize that there were so
many different compound word patterns, each with their own internal logic. In particular, I was surprised that across languages,
compound words are formed by combining lexeme stems rather than the inflected
versions. That consistency is not something I would
have predicted, especially since the
frequency of different compound constructions varies so much from one language
to the next. Humans’ unconscious perception
of compound words is also more complex than it appears. For example, when I read compound words, I don’t
often think about the meaning of the individual morphemes, even if they are
obvious (like in babysit). It’s impressive
that speakers can view these compound words as units, paying no attention to
their substructure, yet simultaneously know how to correctly inflect them so
that only the syntactic head is inflected (e.g. childcare, never *childrencare,
which seems plausible but is unattested).
Finally, in "The Many Ways to Search For A Frog," Dan Slobin discusses the results of linguistic
experiments that asked native speakers of different languages to construct
stories based on sets of images. Slobin
and others studied patterns in word usage, and in particular found a
difference between verb-framed languages (V-languages) and satellite-framed
languages (S-languages). Very broadly speaking, speakers of S-languages tended
to use specific verbs to describe the manner of movement and add detail to
actions, whereas speakers of V-languages produced more elaborate descriptions
of locations and end states.
Slobin also found languages that fall somewhere between these two categories.
Altogether, these readings highlight the endless flexibility
of language: words combine according to regular rules, intransitive
verbs take on objects, and the same situation can be described with different
emphases depending on the language.