Carl Shulman's Interviews
#191 (Part 2) – Carl Shulman on government and society after AGI
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order! If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor…
#191 (Part 1) – Carl Shulman on the economy and national security after AGI
This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order! The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen…
#112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications
Rebroadcast: this episode was originally released in October 2021. Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen…
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
The second half of my 7-hour conversation with Carl Shulman is out! My favorite part, and the one that had the biggest impact on my worldview. Here, Carl lays out how an AI takeover might happen: AI can threaten mutually assured destruction from bioweapons, use cyber attacks to take over physical infrastructure, …
Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
In terms of the depth and range of topics, this episode is the best I’ve done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of. We ended up talking for 8 hours, so I'm splitting this episode into 2 parts. This part is about Carl’s model of an intelligence explosion…
#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications
Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death…
Possible Pairings
You might also be interested in
Will MacAskill (author)
Dwarkesh Patel (journalist)
Robin Hanson (economist)
Bryan Caplan (economist)
Julia Galef (interview host)
Marc Andreessen (investor)
Tyler Cowen (economist)
Brian Christian (author)
Russ Roberts (interview host)
Ilya Sutskever (business executive)
Anthony Pompliano (entrepreneur)
Garett Jones (economist)