Saturday, November 5, 2016

Social weighting is key

Sumner's article, "The social weight of spoken words," introduces the idea that speech serves social, and not just purely linguistic, functions. Our social representations are not merely activated after we listen to someone talk, which is what the Lev-Ari and Keysar experiment presupposes. Instead, these social evaluations occur early in the process of understanding and comprehension, which in turn influences how we encode speech events and retain information. Social weighting means that voice cues can trigger social biases and affect how we attend to the information a speaker is conveying. Social weighting arguably plays a more important role than frequency: we can understand things we rarely hear, and we can remember "standard" forms of words better even when we hear them less often than more typical, common forms.

Sumner and Kataoka test the assertion that frequency cannot tell the whole story of speech perception and memory. The first half of their experiment showed that both General American (GA) and British English (BE) accents produced greater priming than the New York City (NYC) accent. If one looked only at how frequently listeners heard each of these accents, it would be puzzling that GA and BE produced recognition equivalence. Social weighting, however, helps explain why listeners attend to speech events carrying these accents and voice cues differently than they do to the NYC accent, creating different semantic encodings. The second half of the experiment showed that listeners falsely recalled more lures for the NYC accent than for GA or BE. Again, frequency alone cannot explain why listeners recalled spoken words equally well from GA and BE speakers.

I also found the distinction between verbatim and gist memories in this paper very interesting. I certainly have false or incomplete memories of school lectures whenever I listen half-heartedly. This makes sense because I am encoding the general idea of what was said rather than trying to remember or internalize exactly what was said. What struck me was that dense yet weakly encoded episodes can produce recognition equivalence with sparse yet strongly encoded episodes. That explains how listeners reacted to GA and BE in similar ways, even though their frequency of exposure to each varied!
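To make that equivalence concrete for myself, here is a toy sketch (my own illustration, not a model from Sumner and Kataoka): if we imagine recognition strength as the summed trace strength of a word's stored episodes, then many weakly encoded episodes can add up to the same total as a few strongly encoded ones. The numbers and the `recognition_strength` function are hypothetical.

```python
def recognition_strength(num_episodes, encoding_strength):
    """Toy measure: summed activation across stored episodes of a word form."""
    return num_episodes * encoding_strength

# Dense but weakly encoded exposure (e.g., a frequent variant processed shallowly)
dense_weak = recognition_strength(num_episodes=100, encoding_strength=0.02)

# Sparse but strongly encoded exposure (e.g., a rare but socially salient variant)
sparse_strong = recognition_strength(num_episodes=4, encoding_strength=0.5)

print(dense_weak, sparse_strong)  # both 2.0: "recognition equivalence"
```

Of course, real episodic memory is far more complex, but the arithmetic captures why frequency alone cannot predict which forms listeners recognize best.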

The King and Sumner article further tested these ideas, showing that voice cues like gender and age - and not just accent, as in the previous experiments - also affect word recognition. We already know that voice characteristics shape our expectations about what content a speech event may contain and what words a speaker may say next. This article showed that the interpretation of words also varies with voice cues: the differences in top associates were much larger across different speakers than within a single speaker. The second experiment showed that listeners responded in ways that depended on that particular speaker's association strengths. Together, these papers showed that we internalize phonetic variation, and social weighting thus helps explain how we encode linguistic events.
