Seminar in Computational Linguistics

  • Date: –14.30
  • Location: Engelska parken 9-3042 and Zoom
  • Lecturer: Shalom Lappin
  • Contact person: Artur Kulmizev
  • Lecture

Towards Non-Algebraic Models of Linguistic Representation

For many decades, linguists have employed grammar formalisms and model theories to encode the syntactic and semantic properties of natural language. These have frequently been incorporated into symbolic NLP systems and logic-driven AI frameworks. The revolution in deep learning has replaced these formal methods with distributed neural networks, most recently with attention-based transformers. These are generally regarded as engineering devices that perform many learning and processing tasks more efficiently than rule-based machine learning systems. In Lappin (2021) I consider whether deep learning networks can also be taken as alternative models of linguistic representation, which can contribute insight into the ways in which humans acquire knowledge of language. In this talk I will present several of the arguments given there in support of the conclusion that the results of deep learning are indeed relevant to our understanding of the cognitive foundations of natural language. These results suggest that non-algebraic models should be taken seriously as possible alternatives to the traditional formal frameworks within which linguistic knowledge has been encoded.