Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.

Topics discussed in this episode include:

- The historical and intellectual foundations of AI
- How AI systems achieve or do not achieve intelligence in the same way as the human mind
- The rise of AI and what it signifies
- The benefits and risks of AI in both the short and long term
- Whether superintelligent AI will pose an existential risk to humanity

You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/

You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3

You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

Timestamps:

0:00 Intro
4:30 The historical and intellectual foundations of AI
11:11 Moving beyond dualism
13:16 Regarding the objectives of an agent as fixed
17:20 The distinction between artificial intelligence and deep learning
22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind
49:46 What changes to human society does the rise of AI signal?
54:57 What are the benefits and risks of AI?
01:09:38 Do superintelligent AI systems pose an existential threat to humanity?
01:51:30 Where to find and follow Steve and Stuart

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
From "Future of Life Institute Podcast"