
MTS Speaker Isabel Papadimitriou: What can we learn from language models?

MTS is the department’s Mind, Technology, and Society speaker series, hosted by a different faculty member each semester. Established through a generous gift from Professors Robert Glushko and Pamela Samuelson, MTS brings researchers and industry professionals from across the globe to present a variety of interdisciplinary work in cognitive science. See our UCMerced CogSci YouTube channel for videos of past MTS talks!

CIS graduate students, faculty, and staff, and all who are interested, are invited! Members of other departments at UC Merced, as well as the general public, are encouraged to attend. (Note: current CIS Ph.D. students are required to attend MTS each semester in residence to fulfill their COGS 250 course requirement.)

Dr. Isabel Papadimitriou's talk, "What can we learn from language models?", will take place from 2:00 to 3:30 p.m. in SSM 104.

Abstract: This talk will examine how our understanding of human language can benefit from the recent, surprising successes of language models in modeling human language production. Language models provide cognitive scientists with an unprecedented empirical tool for expanding and testing our theoretical hypotheses about language. I will go over two main methodologies for taking advantage of language models as an empirical tool. First, language model interpretability: examining language model internals as functional theories of how linguistic information can be represented and used. Second, model training as an empirical testbed: examining what kinds of environments make statistical language learning possible or harder. Both methodologies showcase the importance of developing empirical paradigms that narrow the gap between computational methods and linguistic concerns, so that language models can help us expand the horizons of our hypothesis space.

Bio: Isabel Papadimitriou is an assistant professor of linguistics at the University of British Columbia. She is interested in analyzing how large language models learn and represent abstract structural systems, and in how experiments on language models can help enrich the hypothesis space around what makes the learning and representation of language possible. Before UBC, she was a Kempner Fellow at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard, and she earned her Ph.D. in the Stanford NLP group.

For more information or to sign up for email announcements, please contact the talk series organizer: cis-mts-lead@lists.ucmerced.edu.