Dwarkesh Patel's Interviews
Grant Sanderson (3Blue1Brown) - Past, Present, & Future of Mathematics
I had a lot of fun chatting with Grant Sanderson (who runs the excellent 3Blue1Brown YouTube channel) about: - Whether advanced math requires AGI - What careers mathematically talented students should pursue - Why Grant plans on doing a stint as a high school teacher - Tips for self-teaching - Does Gödel's incompleteness…
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
The second half of my 7-hour conversation with Carl Shulman is out! My favorite part, and the one that had the biggest impact on my worldview. Here, Carl lays out how an AI takeover might happen: * AI can threaten mutually assured destruction from bioweapons, * use cyber attacks to take over physical infrastructure, *…
Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
In terms of the depth and range of topics, this episode is the best I've done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of. We ended up talking for 8 hours, so I'm splitting this episode into 2 parts. This part is about Carl's model of…
Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes
It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb. We discuss - similarities between AI progress & the Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation) - visiting…
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more. If you want to get to the crux of the conversation…
Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment
I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about: * time to AGI * leaks and spies * what's after generative models * post-AGI futures * working with Microsoft and competing with Google * difficulty of aligning superhuman AI. Watch on YouTube. Listen…