Generally Intelligent
Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.
Episode 34: Seth Lazar, Australian National University: On legitimate power, moral nuance, and the political philosophy of AI
Seth Lazar is a professor of philosophy at the Australian National University, where he leads the Machine Intelligence and Normative Theory (MINT) Lab. His unique perspective bridges moral and political philosophy with AI, introducing much-needed rigor to the question of what will make for a good and just AI future.
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Tri Dao is a PhD student at Stanford, co-advised by Stefano Ermon and Chris Ré. He’ll be joining Princeton as an assistant professor next year. He works at the intersection of machine learning and systems, currently focused on efficient training and long-range context.
Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize
Jamie Simon is a 4th-year Ph.D. student at UC Berkeley advised by Mike DeWeese, and also a Research Fellow with us at Generally Intelligent. He uses tools from theoretical physics to build a fundamental understanding of deep neural networks so they can be designed from first principles.
Episode 31: Bill Thompson, UC Berkeley, on how cultural evolution shapes knowledge acquisition
Bill Thompson is a cognitive scientist and an assistant professor at UC Berkeley. He runs an experimental cognition laboratory where he and his students conduct research on human language and cognition using large-scale behavioral experiments, computational modeling, and machine learning.
Episode 30: Ben Eysenbach, CMU, on designing simpler and more principled RL algorithms
Ben Eysenbach is a PhD student at CMU and a student researcher at Google Brain. He is co-advised by Sergey Levine and Ruslan Salakhutdinov, and his research focuses on developing RL algorithms that achieve state-of-the-art performance while being simpler, more scalable, and more robust.
Episode 29: Jim Fan, NVIDIA, on foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant
Jim Fan is a research scientist at NVIDIA who received his PhD at Stanford under Fei-Fei Li. Jim is interested in building generally capable autonomous agents, and he recently published MineDojo, a massively multiscale benchmarking suite built on Minecraft, which won an Outstanding Paper award at NeurIPS.