After reading the three articles “The social weight of spoken words,” “Effects of phonetically-cued talker variation on semantic encoding,” and “Voice-specific effects in semantic association,” I found that this week’s readings formed a fascinating bridge from last week’s focus on sociolinguistics. Drawing connections between the three, I discovered that much more goes into our perception of social information than the words alone. The readings brought up accents, priming, similarity between speakers, and speaking patterns, among other things, as factors in how we perceive spoken information.
What I found particularly fascinating was the idea from “The social weight of spoken words” that people discriminate based on the social cues of language, and that this kind of bias arises early on, while the language is still being processed. A great deal of information about an individual’s identity and characteristics is drawn from their spoken language, and the readings suggested that we encode divergent semantic associations for these speakers. The reference to Sumner and Kataoka’s finding that speech in a prestigious accent is better remembered than speech in a stigmatized accent was especially jarring. I’ve noticed this kind of bias not just for examples like the ones given in the article (Southern Standard British English versus New York City English), but for accents that feel a lot closer to home; in my case, a Chinese accent. In California, I’ve noticed this type of discrimination when my family members with accents call businesses over the phone: they often get a rude, impatient response. I don’t have an accent, however, and when I step in and take over the conversation, people on the other end have actually said things like “Oh, finally someone I can understand,” even though there was nothing incomprehensible about my relative’s accented English! From these kinds of experiences, it seems that people are far less biased toward, and far more willing to pay attention to, someone whose speaking characteristics and patterns resemble their own.
Although the paper “Voice-specific effects in semantic association” mentioned that listeners “shift their perception of phoneme boundaries depend[ing] on audio or visual cues to… speaker dialect,” the bias described in “The social weight of spoken words” still exists to a large extent. This is an intriguing connection to our earlier discussion of sociolinguistics: these readings further illustrate the biases and social cues that we carry with us when listening to spoken language, and the diversity of signals, beyond the words themselves, that influence what we perceive and understand. This raises several questions. Is it possible for someone to isolate words from the surrounding language cues, and how would this be achieved? In an interview, would a less qualified candidate with a British accent outperform a more qualified candidate with a New York City accent? Similar to Rickford’s argument for the social responsibility of linguists studying AAVE, can we make a parallel case for linguists’ responsibility to educate people about these types of cues and the biases they produce?
I also wonder how we could separate out the surrounding language cues, short of reducing everything to text! And if we cannot, which speech patterns weigh most heavily in producing a discriminatory response, or does that depend on the listener? Moreover, can some patterns offset each other? Consider the quote from Podesva’s piece on Condoleezza Rice, that she may “have to be twice as good, given [she’s] black.” Do we adjust our biases and become more neutral the more we interact with people? Is there a time frame for how long it takes to discard these biases with regard to a specific individual we interact with? Do we ever discard them entirely? What speech patterns or actions would help us do so?