I appreciated the gentle introduction Carnie gave us to the world of syntax and, more broadly, Language. Chapters one and two were quite helpful in setting the tone, defining terminology, and correcting a few misperceptions about the field of linguistics. In chapter three, I was intrigued by the idea of sentences as hierarchies that can be represented as syntactic trees, which expand recursively and so allow for infinite possibilities. I would like to touch on a brief section from chapter two, however, regarding the argument of learning versus acquisition of language, and the innateness of language.
If parts of syntax are innate, as concluded from the argument that syntax is an "unlearnable system," then is that the boundary between us as ordinary human speakers and artificial intelligence such as Siri? The logical problem of language acquisition states that "you never have enough input to be sure you have all the relevant facts." Supposing these assertions are true, it sounds like, despite our efforts in computer science (machine learning, natural language understanding, etc.), we may never cross the boundary between what is taught and learned (the artificial machine) and what rests on an innate grasp of language (human beings), simply because we shall never have enough input to represent the infinite collection of possibilities.
The issue you bring up reminds me of the problem with finite state machines. English grammar technically follows rules, which allows a machine to put sentences together without actually understanding the sentences it produces. The issue in practice, however, is that the machine can get caught in a technically grammatical loop that goes on to infinity. Needless to say, a finite state machine cannot handle dependencies that grow without bound. So it is concluded that a finite state machine cannot capture the English language.
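To make the contrast concrete, here is a minimal sketch in Python (the category labels `Det`, `N`, `V`, `that` and the transition table are my own toy example, not from Carnie). The FSM's loop happily accepts arbitrarily long right-branching sentences like "the dog saw the cat that saw the rat ...", but checking a center-embedded pattern like "the rat the cat saw slept" (n noun phrases followed by n matching verbs) needs an unbounded counter, which no fixed finite-state table can encode — this is the standard argument that English is not a finite-state language.

```python
# Toy finite-state machine over word categories (hypothetical labels).
# The ("Done", "that") transition is a loop: it lets relative clauses
# repeat forever, generating an infinite set of right-branching sentences.
FSM = {
    ("S", "Det"): "HaveDet",
    ("HaveDet", "N"): "HaveNP",
    ("HaveNP", "V"): "HaveV",
    ("HaveV", "Det"): "ObjDet",
    ("ObjDet", "N"): "Done",
    ("Done", "that"): "HaveNP",  # loop back: "... that saw the rat ..."
}

def accepts(categories):
    """Run the FSM over a list of word categories; True if it ends in Done."""
    state = "S"
    for cat in categories:
        key = (state, cat)
        if key not in FSM:
            return False
        state = FSM[key]
    return state == "Done"

def center_embedded_ok(categories):
    """Check the pattern (Det N)^n V^n, as in 'the rat the cat saw slept'.
    This requires counting the noun phrases and matching the verb count,
    i.e. an unbounded counter -- something no finite-state table can do."""
    i, n = 0, 0
    while i + 1 < len(categories) and categories[i] == "Det" and categories[i + 1] == "N":
        n += 1
        i += 2
    return n > 0 and categories[i:] == ["V"] * n

# "the dog saw the cat"
print(accepts(["Det", "N", "V", "Det", "N"]))  # True
# "the dog saw the cat that saw the rat" -- the loop fires once
print(accepts(["Det", "N", "V", "Det", "N", "that", "V", "Det", "N"]))  # True
# "the rat the cat saw slept": two noun phrases, two verbs, counts match
print(center_embedded_ok(["Det", "N", "Det", "N", "V", "V"]))  # True
# one noun phrase but two verbs: the counts don't match
print(center_embedded_ok(["Det", "N", "V", "V"]))  # False
```

The point of the sketch is that `accepts` only ever remembers one of six states, no matter how long the sentence gets, while `center_embedded_ok` has to keep a count `n` that can grow without limit.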
I had a similar thought when I was reading Carnie. Although I think that computers will be able to process enough data to replicate natural language very well in the future, I do wonder if this could cause problems in getting computers to originate and articulate new ideas or concepts.
You make an interesting point on the learnability of language by computers. I think it poses an interesting challenge, because it implies one of two things. Either: 1. computers can't learn language without us building "innate" rules into the system (which is admittedly a strange statement); or, 2. the human brain is more powerful than any Turing complete machine.