University of Southern California
3-4:30 p.m. Monday, March 14, 2016
Statistical learning of auditory patterns as trajectories through a perceptually defined similarity space
Many studies have shown that human adults, infants, and indeed members of various other species can absorb sequential regularities when passively viewing or listening to streams of stimuli — a phenomenon often referred to as “statistical learning.” Computational and conceptual models of this phenomenon rely on a variety of formalisms, such as simple recurrent networks, Bayesian inference, or algebraic rule induction. While models differ dramatically with respect to the algorithms they propose to be at work, they share an implementational assumption that has not been well explored. Specifically, most models of statistical learning assume that the learner can somehow recover the identity or category of the stimuli in real time during learning. That is, in order to learn that “B” follows “A” with some regularity, the learner must be able to identify — or at least label — “A” and “B” rapidly and accurately enough to encode the sequence in a format that makes this regularity available to whatever statistical or symbolic operation is proposed to discover it. But in many cases it is not clear whether participants are capable of encoding the stimulus with the appropriate fidelity to accomplish this task in the way these models propose. Six-month-old infants certainly recognize familiar words in some experimental contexts, but it is difficult to establish that they can correctly recognize unfamiliar syllables spoken in monotone at an unvarying rate of about three per second. Yet they show statistical learning under just these conditions. An alternative to assuming that stimuli are identified or labeled is to assume that participants are aware of how the stimuli are situated with respect to one another in a perceptual similarity space. Learning of sequences, then, can be thought of as learning about the likelihood of different “trajectories” through this space. I will present some data from initial explorations generated by taking this approach.
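To make the shared assumption concrete, here is a minimal illustrative sketch (not the speaker's model) of what most statistical-learning accounts take for granted: the stream arrives as a sequence of already-identified syllable labels, from which transitional probabilities P(B | A) can be tabulated. The syllables and "words" below are toy examples in the style of classic artificial-language experiments, not materials from the talk.

```python
from collections import Counter

def transitional_probabilities(stream):
    """P(B | A) = count of A immediately followed by B, divided by count of A.

    Note the idealization: this only works if every token in `stream`
    has already been correctly identified and labeled in real time.
    """
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A toy stream built by concatenating two hypothetical "words",
# bi-da-ku and pa-do-ti, in varying order:
stream = ["bi", "da", "ku", "pa", "do", "ti",
          "bi", "da", "ku",
          "bi", "da", "ku", "pa", "do", "ti"]

tp = transitional_probabilities(stream)
# Within-word transitions (e.g. bi -> da) come out at 1.0, while the
# word-boundary transition ku -> pa is weaker, since ku is sometimes
# followed by bi instead — the statistical cue to word segmentation.
```

The trajectory-based alternative described in the abstract replaces the discrete labels in `stream` with positions in a continuous perceptual similarity space, so that no symbol table like this one is ever required.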
Jason Zevin is Associate Professor of Psychology and Linguistics at the University of Southern California and Senior Scientist at Haskins Laboratories. His work combines behavioral, computational, and neuroimaging approaches to study basic mechanisms in reading and speech perception. In research on reading, he has recently focused on asking whether the same functional architecture can be applied to understand reading in different orthographic systems. With respect to speech perception, he has studied the perception of speech contrasts by non-native listeners, and, increasingly, is trying to connect the difficulties observed in laboratory perceptual tests with online comprehension in more ecologically valid contexts.