Future of Life Institute Podcast

Updated: 20 Jun 2025 • 237 episodes
www.futureoflife.org

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Episodes

Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed’s decision to resign from Stability AI, the industry’s attitude towards rights, authenticity in AI-generated art…

87 min (01:27:14)

On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI’s development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies…

75 min (01:15:49)

On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents…

61 min (01:01:28)
Gus Docker & Ben Goertzel 23 May 2025 • EN

Facing Superintelligence (with Ben Goertzel)

On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence…

92 min (01:32:33)

On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss…

94 min (01:34:27)

On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses…

95 min (01:35:09)