
Why Building Superintelligence Means Human Extinction (with Nate Soares)
Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.

LINKS:
If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com
Machine Intelligence Research Institute - https://intelligence.org
Nate Soares - https://intelligence.org/team/nate-soares/

PRODUCED BY:
https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(01:05) Introduction and Book Discussion
(03:34) Psychology of AI Alarmism
(07:52) Intelligence Threshold Effects
(11:38) Growing vs Crafting AI
(18:23) Illusion of AI Control
(26:45) Why Iteration Won't Work
(34:35) The No Retries Problem
(38:22) Computer Security Lessons
(49:13) The Cursed Problem
(59:32) Multiple Curses and Complications
(01:09:44) AI's Infrastructure Advantage
(01:16:26) Grading Humanity's Response
(01:22:55) Time Needed for Solutions
(01:32:07) International Ban Necessity

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
From "Future of Life Institute Podcast"