Both of this week’s readings discussed
“scalar implicatures” and children’s and adults’ abilities to reason about them. A scalar implicature arises when the use of a weaker term on a scale, rather than a stronger one, signals that the stronger one is untrue, even though it logically could be true. For example, if someone said that “most of the tables have four legs,” one would assume that not all the tables have four legs (even though if all had four legs, so would most).
The first paper focused on explaining why young children have trouble excluding the stronger form of the scale when they hear a weaker statement, a task adults have no trouble with. It explored accounts based on counterfactuality and linguistic alternatives. One idea that appealed to me was the pragmatic observation that we tend to describe things by their rarest features.
The experiment in the Barner paper focused on children’s knowledge of scalar alternatives and the effect of including the word “only” on their performance in scalar implicature tasks. One very
helpful thing this paper did was outline theoretical steps necessary to “derive
a scalar implicature”. A concept that
came up repeatedly in the paper was the difference between numerical scalars
like 1-10 and context-dependent scalars like “some” or “all”. Considering
that children tend to learn scalar implicature for numbers sooner, I am led to
believe that “real-world” context is one of the driving factors of children’s
difficulty with scalar implicature. Five of something will always be the same
amount, but “some” varies with context.
The idea
of scalar implicature reinforces one of the main ideas that I’ve learned and we’ve
discussed in the course: a lot of information that we communicate through
language isn’t present directly in the words we say. While this differs from last week’s discussion and readings about how information about speakers is carried through language, the two reinforce the same idea. Before this course, I wondered why a computer
that could speak like a human hadn’t been created. It seems now that every week
I see more complexities in language. Do
you all agree a computer would have to be able to reason about the world and
the behaviors of others in order to speak correctly?
Another
trend I’ve noticed emerging over the past few weeks is that speakers tend to
try to save time and energy in many ways in their speech, and sometimes
sacrifice perfect clarity of meaning in doing so. This shows up in how we cut sounds out of words (although a word with every sound enunciated exactly as it is spelled sometimes sounds wrong) or how we don’t say “some but not all of the tables have four legs.” It even shows up in dialects; some dialects
like AAVE involve the elimination of some words or sounds from speech, which
can make it more difficult for others to understand. Are these shortcuts really worth the time we
save?
Hey Ethan!
I have also put a lot of thought into computer speech and the complexities of language. To answer your first question about whether a computer would need to reason about the world and the behaviors of others, I think this is true. I think there are layers of complexity in our subconscious inferences, interpretations, and emotional attachments that affect how we reason and understand, and I don't know how a computer would be able to simulate the exact processes that we go through. I bet that's why we often get frustrated at automated message responses and the Google Maps voice (or at least I do) - language is just so much more complex than we often realize. I'm glad you brought that up and I would definitely be interested to take a linguistics/computation class to learn more about that.