Wednesday, November 30, 2016

Is Adding the Suffix "-er" Innate or Learned?

The reading this week, "Learning to coin agent and instrument nouns" by Clark & Hecht, focused on how children acquire the conventional adult devices for coining new agent and instrument nouns in English. Specifically, the paper discussed how children learn to use the suffix –er. The study showed that the youngest children were inconsistent in their use of –er, slightly older children were more consistent, and the eldest children used –er consistently for both agents and instruments. The paper proposed three principles that interact to account for this phenomenon. The idea that the use of –er becomes more consistent over time recalls another idea we studied previously, one I have become enthralled by: is language truly innate? Carnie notes that "children still acquire language in the face of complete lack of instruction." One conversation that Clark & Hecht include in their paper is the following:

Child: What’s that called?
Mother: A typewriter.
Child: No, you’re the typewriter; that’s a typewrite.

True to Carnie’s sentiment, this child will undoubtedly come to terms with the idea that the physical object is called a "typewriter" rather than a "typewrite." However, if children fill gaps in their vocabulary by constructing new word forms to carry meanings for which conventional forms have not yet been learned, or happen not to exist, drawing on their real-world experience, linguistic structure, and social reasoning, shouldn’t we conclude that language is learned? To me, the fact that children come to understand the suffix –er better over time means that at least some aspects of language are learned. It all seems like a problem of defining what it means for language to be innate. I would argue that even if children have the innate ability to reason through the suffix –er given context, this does not mean we can conclude that language is innate. Rather, this is an argument that supports their innate ability to reason, not to use language. Thus, the findings of Clark & Hecht are evidence that language is learned rather than innate.

Wednesday, November 23, 2016

Quick Coining Thoughts

When first reading through Clark and Hecht, I thought of my roommate, who conducts similar card studies with three-year-olds at a school in San Francisco. The night before she went to conduct the study for the first time, she went through it with me. It was somewhat comical having my best friend pretend I was a three-year-old and ask me basic questions. However, I remember telling her that I was surprised young kids are so cooperative. I could see how they might feel intimidated, judged, or influenced by the experimenter. The questions were so simple that I even questioned myself and wondered what exactly they were looking for. Regarding this experiment, I could see how kids could feel discouraged or swayed, especially in the groups that thought of suppletive words (like 'scissors' for something that cuts) and received feedback based on that. How much of the children's responses corresponds to their figuring out what the experimenter wants, as opposed to their level of language? There have been numerous studies of humans' - particularly babies' - social skills in simply recognizing facial expressions. How does that progress with language?
In addition to the psychological studies that demonstrate social recognition, I also think of the studies that demonstrate humans' submission to authority (like Zimbardo's and Milgram's experiments). This reading stated that children "relinquish their own innovations." I am curious how age corresponds to the development of, and access to, the "language center," but also how quickly children would adjust their word use based on who is or is not correcting their language. How much does children's perception of authority influence what is considered proper language acquisition?

Monday, November 14, 2016

The Surprising Complexity of Scalar Implicature

This week’s papers both focused on language development in children, specifically regarding the concept of ‘scalar implicature.’  Scalar implicature refers to our ability to infer additional meaning from words like ‘none’, ‘some’, and ‘all’ that fall along a scale.  Adults interpret ‘weaker’ scaling words as negating ‘stronger’ ones – saying ‘some people have glasses’ implies that not all people have glasses.  Young children do not make this same inference, despite their skill at making linguistic inferences in a wide variety of other contexts.  Why do children develop scalar implicature later than other, similar skills?  Is the delay a result of lack of lexical knowledge, or does it mean a more fundamental logical mechanism is slow to develop? 

Stiller used three experiments to explore the source of delayed scalar implicature.  He used an implicature task with a non-standard scale to assess whether young children can use implicatures before they understand the subtext communicated by words like ‘some.’  The three- and four-year-old participants performed better on these simplified tasks, which relied on the same kind of logical reasoning as scalar implicature.  However, their performance was still not comparable to that of adults.  Stiller further examined the importance of logical structure and context by adding additional features to the picture sequences used in the experiment, obscuring the ‘scale’ on which they were compared.  Adult participants no longer inferred scale from the image series, implying that scalar implicature depends on overall context rather than individual features.  His third experiment confirmed the power of context over implicature: we are more likely to construct scalar implicatures for rare features than for common features.  Stiller's results give us insight into the underlying logic of scalar implicature and the impact of feature rarity.  In the process, Stiller describes some interesting patterns that we unconsciously use to communicate more effectively.  For example, rarer features convey more useful information than common features, and people are more likely to notice and mention them.

In his paper "Accessing the unsaid: The role of scalar alternatives in children’s pragmatic inference," David Barner discusses several previously proposed sources of the relatively late development of scalar inference: it could be due to limitations on working memory, difficulty understanding and incorporating context, or a limited ability to come up with scalar alternatives (like not thinking of ‘all’ as a logically stronger alternative to ‘some’).  Like Stiller’s, Barner’s results show the importance of context in scalar inference.  In his experiments, the presence of the word ‘only’ did not significantly modify children’s acceptance or denial of statements with context-independent scales (like ‘only some …’ vs. ‘some …’), whereas it made a huge difference when questions with specific contextual alternatives were asked (like ‘the cat and dog …’ vs. ‘only the cat and dog …’).  This aligns with Stiller’s finding that the logical underpinnings of scalar inference are not what cause this feature to develop late; nor are other factors like working memory to blame.  The main issue appears to be unfamiliarity with the implied context of extremely general scaling words like ‘some’ and ‘all.’

One thing I find amusing is that children interpret these statements more literally and logically than adults, who seem to lean on social convention (or general contexts and conventions) to convey a more specific meaning than they express.  This subtlety makes our communication more efficient, but it also seems to come with drawbacks and a need for context that make learning slightly slower.

All of my cake

Both of the articles assigned this week covered the same subject from different perspectives: children "misunderstanding" the word "some". That is, both studies focused on how children performed with scalar implicature. If I'd eaten all of the cake, would children in general agree with a statement that I'd eaten some of it?

Barner et al. show that they do, in contrast with adults, and conclude that this is because children have not had enough linguistic experience to access a set of scalar alternatives quickly and intuitively. While their study doesn't disprove the claim that it is instead because children's brains, still not fully developed, lack the processing power to do so, it does weaken that claim. Stiller et al., on the other hand, show that children do have the processing power: in their experiment, a task requiring the same processing (but with different lexical items) is completed satisfactorily. Their findings also support the counterfactual theory, under which a description carrying more information than was given is treated as false - if I say my friend has glasses, both adults and children would guess that it is someone with only glasses rather than someone with both glasses and a top hat. At the same time, they refute, to an extent, the linguistic-alternatives theory that Barner et al. assume: that this phenomenon occurs because people consult a list of scalar alternatives when deciding whether an utterance with a scalar implicature is true.

This leaves the question open: why are children unable to come to the same conclusion as adults about "some of the cake" as opposed to "all of the cake", if they clearly have the processing power and adults do not actually access a list of scalar alternatives? While I have no background in either pragmatics or developmental psychology and am unqualified to answer this conclusively, there seems to be a lack of experience that prevents children from interpreting "some" as automatically excluding "all". As Stiller et al. mention in their conclusion, the children performed on an equal level with the adults on scalar implicature when different lexical terms happened to be used, and the authors note the role of world knowledge in the decision. Perhaps this could also be applied to Barner et al.'s results. In a slightly different twist on their own conclusion, the children might simply not have an exclusive definition of "some" and "all" - not because they haven't been explicitly taught that set of scales, but because they have not seen as many circumstances where a differentiation would be necessary, and - as the authors mention - they have not been taught the difference.

Interestingly enough, this particular scenario appears to have little application to machine learning. Unlike humans, computers are usually programmed to find all of the occurrences of something, and will usually only show some of those occurrences due to limitations in the interface (for example, Google will not show you all of the billions of webpages matching a search on one page). Furthermore, if absolutely necessary, it is very easy to hard-code the difference using logic gates, and it may often be more practical to do so than to let a machine learning algorithm acquire the difference the hard way. If for some reason one did let the algorithm learn it, however, it's conceivable that different results would show up, since, according to the conclusion synthesized from the two papers, a machine learning algorithm would likely have no more trouble learning in a circumstance where the linguistic-alternatives theory is plausible than in one where it is not.
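To make the hard-coding point concrete, here is a minimal Python sketch of my own (not from either paper, and the function names are just placeholders) contrasting the purely logical reading of "some" with an explicitly coded adult-like pragmatic rule:

```python
def logical_some(eaten, total):
    """Literal 'some': at least one piece, compatible with 'all'."""
    return eaten >= 1

def pragmatic_some(eaten, total):
    """Adult-like 'some': at least one piece, but not all of them."""
    return 1 <= eaten < total

# The literal (child-like) reading accepts "I ate some of the cake"
# even when the whole cake is gone; the hard-coded pragmatic rule does not.
print(logical_some(8, 8))    # True
print(pragmatic_some(8, 8))  # False
```

This is the sense in which the distinction is "easy to hard-code": the pragmatic reading is just one extra comparison, whereas a learning system would have to discover that restriction from examples.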

Black Lives Matter vs. All Lives Matter

Barner and Stiller both discuss the concept of “scalar implicature,” an implicature in which a weaker, more ambiguous term is used to describe quantity. For example, when someone says “some of the students went to the mall,” it implies that not all of the students went to the mall, even though it could be logically true that all of the students went to the mall. Both Barner and Stiller compare understanding of scalar implicature among children and adults. They found that children often have a harder time computing scalar implicatures, whereas adults are much more adept at inferring scale. This is perhaps because our ability to understand such implicatures is refined as our experiential knowledge expands.


Reading about scalar implicature reminded me of an article that I’d read about the “Black Lives Matter” movement and the subsequent “All Lives Matter” response. The article named two primary interpretations of the slogan: 

1) black lives matter [as much as others]
2) black lives matter [more than others]

Depending on your background, you might interpret the slogan one way or the other. The point is that someone who interprets it in the second way would feel more inclined to respond with "all lives matter," because they feel that "black lives matter" discounts the lives of other races. In doing so, they miss the silent scalar implicature: "Black Lives Matter" is additive (implying that black lives matter just as much as others) rather than exhaustive (implying that black lives matter more than others). The response "All Lives Matter" then comes off as dismissive to supporters of "Black Lives Matter," culminating in misunderstanding on both sides.

barner and stiller

The Barner article talked about how words that vaguely qualify the amount of something, like "some" or "many", lie on a scale even though they do not represent a finite number. Children, however, are often unable to identify this, as they do not rank these words on a scale in the same way that they rank numeric values.

I think that this is interesting because it shows that people can use imprecise language to talk about precise things.  Our intentions go beyond the language that we use, and social context allows for our meanings to shine through despite the imprecision of our language.

The Stiller et al. article used three experiments to show that differences help children learn. Children come to realize that having certain features is more unusual than not having them. For example, it is more noteworthy to have a monocle than not to have one. In this way, children are able to learn the nuances of scalar implicature.

These readings made me think about the ways in which we harness language. We are often semantically vague in our meanings -- "I drank some of the wine" could, in fact, mean that the person drank all of the wine, since "some" doesn't actually negate "all". However, when we hear this statement we assume that there is still wine left. This makes me think about lies of omission. One could harness this linguistic loophole and still technically be truthful. This, however, begins to get into ethics, which I'm not even going to try to comment on.
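As a small aside, the literal compatibility of "some" with "all" is exactly what logic (and code) gives you by default. A tiny Python sketch of my own, not from the readings, of the loophole:

```python
# Ten glasses of wine; suppose the speaker drank every one of them.
glasses_drunk = [True] * 10

some_statement = any(glasses_drunk)  # literal "I drank some of the wine"
all_statement = all(glasses_drunk)   # "I drank all of the wine"

# Literally, the "some" statement is still true when "all" is true,
# so the speaker hasn't lied -- only omitted the stronger alternative.
print(some_statement, all_statement)  # True True
```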

Sunday, November 13, 2016

Oh hey there, Grice

"Neither the word only nor the quantifier was emphasized by the experimenter’s prosody." - Barner, p.92

As a kid, I often exploited the implicatures of words like "some" to get away with deceiving without outright lying. I was scrupulous and somewhat obsessive about not lying, but apparently I had no problem with cutting some corners. However, I realized that I could only make it believable if I delivered the line with the right prosody. That is, I had to fight the urge to highlight my conniving, to show off to the very people I wished to remain inconspicuous to. (By people, I mainly mean my mom and dad.)

It was really interesting to me that the Barner study, the Stiller study, and the studies they cite find a lot of linguistic and semantic sophistication in young children. I see this linguistic capacity in my little siblings and cousins, and I have vivid memories of my own early reasoning. My takeaway is that kids have to learn some scalars; it's truly amazing to see a bit of our linguistic capacity come to be. Still, the fact that the "inferential mechanisms underlying implicature are present in young children" (Stiller) is awe-inspiring to me. Why and how this is the case remains an open question that I'd like to explore more deeply.
I took Linguist 130A winter of my freshman year, so it was delightful to see this kind of material again. Perhaps the most important idea in pragmatics is that “speakers’ intended meanings go beyond the literal meaning of their utterances.”

In this week’s reading material there was a focus on children and their struggles with scalar implicatures. (Scalar implicature refers to using weak terms to imply the negation of stronger ones that lie along the same “scale.”) More precisely, the focus was on children failing to give an adult-like response with “some” and “all.” It makes sense to me that children have an easier time with numerals, which have lexically strengthened, exact meanings. In fact, I’d argue that this example is representative of learning in general as a young kid. I remember having the thought while in high school that I was constantly relearning concepts that were once taught to me in black and white. When you’re a young kid, learning is more about memorization than it is about thinking critically. Anyway, as stated in the Barner paper, it seems that implicatures require “additional processes” that you flex more as an adult.

It’s fascinating to me that children accept a weaker version of “or.” When I started taking Computer Science classes, it initially felt wrong to write code that logically followed the weaker “or,” and I had to remind myself that the stronger “or” isn’t the only “or.” (Maybe in this way kids are smarter!) It’s also fascinating that kids do prefer stronger, more informative descriptions of scenes, but aren’t able to compute a scalar implicature just yet. They’re also able to assign strengthened interpretations when alternatives are provided contextually. It seems in the end that kids do know that “some” and “all” represent different set relations, but need additional learning.
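As a concrete illustration of the two “or”s (my own sketch, not from the readings): in code, the built-in or is the weaker, inclusive one, while the exclusive reading has to be asked for explicitly.

```python
soup, salad = True, True   # suppose I end up taking both

inclusive = soup or salad  # the "weaker" or: true if either, or both
exclusive = soup != salad  # the "stronger" or: true only if exactly one holds

print(inclusive)  # True  -- logic (and, literally, kids) accept this
print(exclusive)  # False -- the adult "but not both" reading fails here
```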

Lastly, I hope we get to discuss further the idea of a “cooperative speaker.” In these readings I’ve learned that one rule for being a cooperative speaker is to make your contribution “as informative as is required, and do not make your contribution more informative than required.” I’d like to review more rules! Another rule I can think of is to give factual responses.