The Gussenhoven and Kenstowicz
articles discussed the intricacies of phonetics and phonology. Gussenhoven
outlined how we produce sounds: the mechanisms in the human body that make
speech and how they interact with one another to form the words of various
languages. Kenstowicz, on the other hand, described the ambiguities and
misrepresentations that can occur in phonology as we translate sounds from
speech to page and back to speech again. Kenstowicz's article built on
Gussenhoven's by taking the simple consonants and vowels that we can produce
and discussing how these sounds come to carry meaning.
First, Gussenhoven introduced the
different biological speech organs. The lungs, larynx, vocal tract, pharynx,
mouth, and nasal cavity all perform necessary functions for speech, but they
retain biological purposes beyond it, namely eating and breathing. Creating an
air pressure difference in the appropriate location is the main mechanism for
making sounds; English speech is produced almost entirely with a pulmonic
(lung-powered), egressive (outward-flowing) airstream. I noticed an emphasis on
the glottis as essential to speech production: through the rapid opening and
closing of the vocal folds, the glottis produces voicing. A person can even
make voiceless sounds, in which the vocal folds do not vibrate. Certain sounds
can be aspirated, where the vocal folds remain open for a moment after the
release, as in tea, pea, and key. The mouth is most important in forming the
specific sounds. Gussenhoven also discussed three voice qualities (whisper,
breathy, and creaky) and how they are produced. Thus, these biological speech
organs work together, through air pressure, to determine sound and speech.
In contrast to Gussenhoven,
Kenstowicz began his article with a statement about speech perception. He dealt
with allophones, the context-dependent variants of phonemes. The ambiguity of
communication, according to Kenstowicz, derives from unpredictable lexical
specifications, language-particular rules, and Universal Grammar (UG) default
values. Essentially, Kenstowicz asserts that the difference between what is
written and what we say comes from an alphabet that is not precise enough to
reflect what we actually say, together with the rules and conventions of each
specific language. I find it interesting that we still understand speech even
though much of it is not coded precisely into written language.
A real-world example that especially
resonated with me was that of writer
vs. rider. The two words are spelled with different consonants, t and d, but
in American English both are pronounced with the same alveolar flap [ɾ]
between vowels, so when spoken aloud they sound almost identical. We, as
language users, can still understand each other even when words that are
spelled differently on the page collapse together in speech. The letter "t" in
English is an apt example of the same point: the single phoneme /t/ surfaces
as several different sounds, such as the aspirated [tʰ] of top, the
unaspirated [t] of stop, and the flap [ɾ] of water. These different t's are
not noticeable except when dissecting the language deliberately; otherwise,
they are just the phonetic variants that make up words that we, as native
speakers, understand. The concept that we can understand each other without a
written lexicon that reflects the precision of our speech raises the question:
should we be writing phonetic specifications into our languages?
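Kenstowicz's notion of a language-particular rule can even be made concrete in
code. What follows is my own toy sketch, not anything from either article: a
deliberately oversimplified Python version of the American English flapping
rule, applied to simplified transcriptions. It ignores stress (which real
flapping is sensitive to), and the flap() helper and the tiny vowel inventory
are illustrative assumptions rather than a real phonological toolkit.

# Toy sketch of a phonological rule mapping phonemes to allophones.
# Flapping (simplified): /t/ or /d/ between vowels surfaces as the flap [ɾ].

VOWELS = set("aeiouɪʊɛæɑɔəɚ")  # simplified vowel inventory for this sketch

def flap(phonemes: str) -> str:
    """Return the surface form of a phonemic transcription after flapping.
    Real flapping also requires the following vowel to be unstressed;
    this sketch ignores stress for simplicity."""
    surface = list(phonemes)
    for i in range(1, len(phonemes) - 1):
        if phonemes[i] in "td" and phonemes[i - 1] in VOWELS and phonemes[i + 1] in VOWELS:
            surface[i] = "ɾ"  # rewrite the stop as a flap
    return "".join(surface)

# Simplified transcriptions of "writer" /raɪtɚ/ and "rider" /raɪdɚ/:
print(flap("raɪtɚ"))  # raɪɾɚ
print(flap("raɪdɚ"))  # raɪɾɚ  (the two words merge on the surface)

Both underlying forms surface as the same string, which is exactly the
ambiguity described above: a listener who hears [raɪɾɚ] must rely on context
to recover whether the speaker meant writer or rider.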
I was in fact thinking about something very similar while reading Kenstowicz, Emily! I find your phrase 'we, as native speakers, understand' very important. Having spoken to a number of people in Europe over the summer who learnt English as a second language, I found that most judged it rather challenging to understand the variations that come with different accents of English. The prevalence of Hollywood had conditioned their ears to the American accent (especially the Californian), which was distinctly different from the British English that many had been taught when they were little. I guess this isn't too wild an idea when placed against the backdrop of something a Russian once said to me: 'English is both the easiest and most difficult language I have ever learnt'.