Sunday, October 9, 2016

Intuition and Abstraction

The Gussenhoven and Kenstowicz readings both focused on the details of human speech production. The Gussenhoven reading focused more on the anatomical mechanics of language production: specifically, how, during the production of an utterance, a single change in the position or timing of one part of the vocal tract can shift the sound so drastically that the utterance takes on an entirely different meaning. Gussenhoven detailed and analyzed what seemed like every possible permutation of sound. The Kenstowicz article, by contrast, focused less on the mechanics of sound production and more on the "collective phonetic illusions" that are allophones, the subtle variations of a single phoneme. Kenstowicz states that native speakers often judge similar sounds derived from the same phoneme to be identical, even though the sounds are actually produced differently in the vocal tract. Native speakers don't consciously keep track of these mechanical variations in sound: context is an easier and more efficient way of determining which words are being said and what meanings they carry than syntactic rules are. Thus, Kenstowicz argues, because humans aren't consciously aware of all the rules and representations of formal language theory, those rules are just a hypothesis about what is really happening in the human mind.

While reading both of these papers, my mind drifted to the concept of abstraction in computer science, where the programmer cares only about the functionality of a program or method (what it is capable of doing), not the way that functionality is implemented "behind the scenes". Computer programs are built upon layers of abstraction: a piece of functionality is built and then put in a "black box", used by other parts of the program without those parts having to worry about what exactly is inside the box, as long as the box functions as promised.

Much as a program abstracts away the details of some functionality contained within it, so do humans abstract away the functionality of our vocal tracts when we speak. Under normal circumstances, when we speak our preferred language (and are not being linguists, formally analyzing everything), we do not pay much attention to the opening and closing of our throats, or the exact position of our tongues when uttering a consonant, or whether the t in "stem" really is different from the t in "ten". Furthermore, not only do we abstract away from the mechanics, but there are also mechanics of language production that are extremely difficult to control consciously. Although we can easily control the pitch of our voices, it is very difficult to directly manipulate our vocal folds into opening or closing; and although we can easily control whether we are breathing in through our mouths, it is not easy to directly make the velum drop. When we speak our primary language, we abstract away these aspects and focus instead on the speech itself, on what we intend to say rather than the mechanics of how to say it. This abstraction contributes to the "phonetic illusions" in which sounds that are actually produced differently sound very similar, or the same, because we are not readily conscious of how they were produced.
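To make the analogy concrete, here is a minimal sketch in Python of the black-box idea applied to speech. The function names (`articulate`, `say`) and the bracketed-phoneme output are purely hypothetical, invented for illustration; the point is only that the caller of `say` relies on its promised behavior without ever looking inside `articulate`, just as a speaker relies on the vocal tract without attending to its mechanics.

```python
def articulate(phoneme):
    """The hidden mechanics: in a real vocal tract this would involve
    tongue position, velum height, vocal-fold state, and so on.
    Here we just wrap the symbol in brackets as a stand-in."""
    return f"[{phoneme}]"

def say(word):
    """The 'interface' a speaker actually uses: produce the whole word.
    Callers treat articulate() as a black box and never inspect it."""
    return "".join(articulate(p) for p in word)

print(say("ten"))  # -> [t][e][n]
```

If the internals of `articulate` were rewritten (say, to model the velum explicitly), nothing that calls `say` would need to change, which is exactly the layered abstraction the paragraph above describes.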

However, when we do not speak our primary language, our brains pay much closer attention to the way the sounds are mechanically produced, and to whether a vowel sound we are uttering is "exactly right". For instance, when I am speaking Spanish or Korean (neither of which is a "first" language for me), I find myself paying extra attention, instead of abstracting them away, to the mechanics of my throat, as well as to the subtleties of the consonant and vowel sounds in each language. (In Korean, these distinctions are especially crucial, since ch and j, and b and p, sound even more similar than they do in English.)
