Dwarkesh Patel's Interviews
AI Scaling, Alignment, and the Path to Superintelligence — With Dwarkesh Patel
Dwarkesh Patel is the host of the Dwarkesh Podcast, where he's interviewed Mark Zuckerberg, Ilya Sutskever, Dario Amodei, and other AI leaders. Patel joins Big Technology to discuss the current state and future trajectory of AI development, including the potential for artificial general intelligence (AGI) and superintelligence…
Grant Sanderson (3Blue1Brown) - Past, Present, & Future of Mathematics
I had a lot of fun chatting with Grant Sanderson (who runs the excellent 3Blue1Brown YouTube channel) about: - Whether advanced math requires AGI - What careers mathematically talented students should pursue - Why Grant plans on doing a stint as a high school teacher - Tips for self-teaching - Does Gödel's incompleteness…
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
The second half of my 7-hour conversation with Carl Shulman is out! My favorite part, and the one that had the biggest impact on my worldview. Here, Carl lays out how an AI takeover might happen: * AI can threaten mutually assured destruction from bioweapons, * use cyber attacks to take over physical infrastructure, *…
Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
In terms of the depth and range of topics, this episode is the best I've done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of. We ended up talking for 8 hours, so I'm splitting this episode into 2 parts. This part is about Carl's model o…
Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes
It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb. We discuss - similarities between AI progress & the Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation) - visiti…
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more. If you want to get to the crux of the conve…