This week's selected readings served as an introduction to implicatures and, more specifically, scalar implicatures: phenomena whereby weaker linguistic terms, by virtue of being chosen over stronger alternatives on the same scale (“some” versus “all”), elicit a more precise meaning. Variables thought to influence the ability to comprehend these implications include processing memory (contested, if not debunked), the development of scales, and contextual information, among others.
Because we are interested in how this ability develops, we look to its manifestation in children and find that it is not present from the start; it is learned, at least in part. Children have to build the proper scalar relationships, much as they build a vocabulary, before they can apply this implicative ability, which seems to be grounded more in logical reasoning than in language itself.
From what I understand, and from what is shown in the study that used contextual implicatures in its methodology, the capacity to correctly interpret these implications is innate; it is the ability's linguistic manifestation that needs to be developed.
What I would like to know is how this ability presents itself across different languages and communities. In particular, what are the distributions of scale density (the number of logical alternatives defining a qualitatively equivalent range) among differently parameterized languages? It would also be interesting to see to what extent choosing one scalar quantifier over another, paralleling the effect seen with quality rarity, which is suggested to influence how much information a description expresses, varies across these linguistic scales from language to language. Could saying “some” versus “all” evoke a different experimental distribution (given the same experimental procedures described in the readings) than the equivalent words in other languages would?
One question this phenomenon raises for me is this: what is the best way to interpret these implications, and where does that interpretation overlap with how they are naturally understood?
One distinction that both readings made clear was that the trends in the data were established for complicit, or cooperating, speakers. This label is important: it only accounts for speakers who are unaware of, and have no intention of, manipulating a listener's interpretation of the implications in their speech, whether through scalar implicatures or otherwise. It would be interesting to see whether any legal precedents exist for the interpretation of these technically ambiguous, superficially explicit statements. Where does being a “non-cooperating” speaker fit in with the law?
On one final note, although the data collected were statistically significant in some cases, what would lead to such a split in interpretations among adults given the task of rejecting a certain description? That is, why do so many accept “some” in cases where the majority requires “all”?
I find your question about non-cooperating speakers quite interesting! It seems to me that this still requires some measure of pragmatic inference (namely, reasoning from context that the speaker is being non-cooperative and then determining the actual meaning of their speech). In regard to the Stiller paper, given knowledge of a speaker's non-cooperativeness, a listener might simply determine the meaning they would have derived under the assumption that the speaker was cooperative and then eliminate that meaning from their list of possibilities.
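The elimination reasoning described above can be sketched as a toy computation. This is only an illustration of the idea, not a model from either reading; the function names and the candidate-meaning set are hypothetical. A listener first works out what a cooperative speaker would have meant, then, knowing the speaker is non-cooperative, strikes that reading from the candidates.

```python
# Toy sketch of the elimination reasoning described above.
# All names and the candidate set are illustrative assumptions,
# not taken from the readings.

def cooperative_reading(utterance: str) -> str:
    """Meaning a listener derives assuming a cooperative speaker.
    For the scalar term "some", cooperativity implicates
    "some but not all" (the speaker would have said "all" if it held)."""
    if utterance == "some":
        return "some but not all"
    return utterance  # other terms: taken at face value

def noncooperative_candidates(utterance: str, meanings: set[str]) -> set[str]:
    """Speaker known to be non-cooperative: eliminate the
    cooperative reading from the space of candidate meanings."""
    return meanings - {cooperative_reading(utterance)}

# A listener who knows the speaker is non-cooperative no longer
# assumes that "some" excludes "all":
candidates = {"some but not all", "all"}
remaining = noncooperative_candidates("some", candidates)
print(remaining)  # → {'all'}
```

The point of the sketch is only that this "eliminate the cooperative reading" move is itself a pragmatic inference: it still relies on computing the cooperative interpretation first.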