Future of Life Institute Podcast

Updated: 05 Dec 2024 • 219 episodes
www.futureoflife.org

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.


Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. You can find Nathan's podcast here: https://www.cognitiverevolution.ai

Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI

200 min (03:20:04)
Gus Docker & Connor Leahy 22 Nov 2024 • EN

Connor Leahy on Why Humanity Risks Extinction from AGI

Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this. Here's the document we discuss

118 min (01:58:50)

Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world. Here's Writing Do

63 min (01:03:08)

Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mat

88 min (01:28:09)

Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-thro

90 min (01:30:29)

Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI. You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt

Timestamps:
00:00 AI control
09:35 Challenges to AI control
23:48 AI control a

128 min (02:08:44)