Paul Middlebrooks & Andrea Martin , Brain Inspired

BI 169 Andrea Martin: Neural Dynamics and Language

28 Jun 2023 • 101 min • EN

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience.

My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains. Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast: they are a kind of abstract structure in the space of possible neural population activity, and that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time. One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near others. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition.
We talk about modeling in general and what models do and don't tell us, and much more.

Andrea's website. Twitter: @andrea_e_martin.

Related papers:
A Compositional Neural Architecture for Language
An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions
Neural dynamics differentially encode phrases and sentences during spoken language comprehension
Hierarchical structure in language and action: A formal comparison

Andrea mentions this book: The Geometry of Biological Time.

From "Brain Inspired"
