
Future of Life Institute Podcast
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Show episodes
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts. You can learn more about Ann's work here: https://www.wisea
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI. You can learn more about Michael's work here: http
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware. You can learn more about David's
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com Timestamps: 00:00 What is GiveDirectly? 15:04 AI for targeting cash transfers 29:39 AI for predicting natural disasters 46:0
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. You can find Nathan's podcast here: https://www.cognitiverevolution.ai Timestamps: 00:00 AI progress since GPT-4 10:50 Multimodality 19:06 Low-cost models 27:58 Coding versus medicine/law 36:09 AI
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this. Here's the document we discuss