80,000 Hours Podcast

Updated: 03 Dec 2025 • 307 episodes
80000hours.org/podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.


We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-fi — it’s happening regularly in deployment today. Marius Hobbhahn, CEO of the world’s top research organisation focused…

183 min

Global fertility rates aren’t just falling: the rate of decline is accelerating. From 2006 to 2016, fertility dropped gradually, but since 2016 the rate of decline has increased 4.5-fold. In many wealthy countries, fertility is now below 1.5. While we don’t notice it yet, in time that will mean the population halves ev…

119 min

If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree. In three major reports released over the last year, the Pew Research Center surveyed over 5,000 US adults and 1,000 AI…

103 min

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking. The for-profit’s trouble was that the entire operation was founded on the premise of — and legally pledged to —…

116 min

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. But according to Helen Toner, director of the Center for Security and Emerging Technology in DC, “the US and Chinese gov…

140 min

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating and low feedback. According to Anthropic’s Holden Karnofsky, this situation has now reversed completely. There are now lar…

270 min