I enjoyed learning about how children use semantic transparency, productivity, and the principle of conventionality to form new words. The reading got me thinking about computational models of language acquisition. In my SymSys class, I've been reading about how neural networks model language learning, and from that reading it seems that, of the two competing models of the brain - a symbolic system of rules and representations versus a neural network of nodes and weighted connections - the neural network model is the more accurate one. So throughout this week's reading, I kept wondering how a neural network could incorporate these observed principles of language acquisition.
One thing that confused me was the importance of negative feedback in learning language. It would seem that children rely on correction from parents or teachers to know whether their word constructions are correct - indeed, this was the first example given in the reading. But I wondered how often this actually happens during language acquisition, because another reading in my SymSys class suggested that children rarely receive negative feedback, yet are still very effective at learning new concepts. If kids don't receive negative feedback, how do they define new concepts accurately without defining them too broadly?
I suppose one solution is extremely accurate mental rules that help them make the right guess immediately. But if we adopt a neural network model, where do the rules of semantic transparency, productivity, and conventionality come from? A symbolic-systems model might suggest that a child stores a ranked list of mental representations of the most common word-formation devices; a neural network model would suggest that the weighted connections between neurons store this information. I'm curious how much experience and negative feedback is necessary to train such a network - is it a little or a lot? Are negative examples heeded more than positive ones? Does a neural network then allow a small amount of negative feedback to go “much farther” in training the model than it would in a symbolic-systems model?
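As a rough illustration of why negative feedback might matter so much, here is a tiny perceptron-style sketch. This is my own toy construction, not anything from either reading: the two binary "features" of a word and the target "concept" are invented, and a real model of word formation would be far richer. Still, it shows the basic asymmetry: a learner trained only on positive examples has no pressure to reject anything, while the same learner given explicit negative feedback learns a boundary.

```python
def train(examples, epochs=20, lr=0.5):
    """Perceptron-style learner over two binary features.

    examples: list of (features, label) pairs, label 1 = in the concept.
    The (label - prediction) error term is where negative feedback
    enters: a negative example the learner wrongly accepts pushes the
    weights down, shrinking the set of patterns it will accept.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred  # nonzero only when feedback corrects us
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accepts(model, x):
    w, b = model
    return w[0] * x[0] + w[1] * x[1] + b > 0

# Positive-only data: the learner is never told what is NOT in the concept.
pos_only = [((1, 0), 1), ((1, 0), 1)]
# The same positives plus explicit negative feedback.
mixed = pos_only + [((1, 1), 0), ((0, 0), 0), ((0, 1), 0)]

m_pos = train(pos_only)
m_mix = train(mixed)

# The positive-only learner overgeneralizes: it also accepts (1, 1),
# a pattern outside the concept; the learner with negative feedback rejects it.
print(accepts(m_pos, (1, 1)), accepts(m_mix, (1, 1)))  # True False
```

Of course, children seem to avoid this kind of overgeneralization even with little negative feedback, which is exactly what puzzles me - in this toy setup, the positive-only learner has no way to recover on its own.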
I also don't understand how general principles like conventionality and productivity relate to neural networks. Are these just our observed, human-created labels for behaviors that emerge from the network? That seems likely. I suspect there is some explanation involving how closely connected neurons in a network are to one another, but I don't understand the physiological details well enough. It would be interesting to compare how well machines programmed to learn languages perform at word formation compared to humans.
I really enjoyed this week's reading because it related so directly to the reading I'd been doing about learning in my other class.
I like the way you brought up neural networks to explain the phenomena described in this week's reading. I would also add that neural networks are able to handle exceptions to rules, like using the word "mechanic" instead of coining the term "fixer" or "fix-man". So I think you have brought a very interesting point to the discussion table.