Future of Life Institute Podcast

Updated: 11 Jul 2024 • 209 episodes
www.futureoflife.org

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation. Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/

63 min (01:03:35)

Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com

92 min (01:32:24)

Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com

96 min (01:36:01)

Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org 

37 min (37:12)

Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com

105 min (01:45:20)

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power • 05:18 Will LLMs imitate human values? • 07:15 Why would AI develop dangerous goals?

86 min (01:26:30)