Gus Docker & Roman Yampolskiy , Future of Life Institute Podcast

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

20 Mar 2021 • 72 min • EN

Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Topics discussed in this episode include:
- Roman's results on the unexplainability, incomprehensibility, and uncontrollability of AI
- The relationship between AI safety, control, and alignment
- Virtual worlds as a proposal for solving multi-multi alignment
- AI security

You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/

You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:35 Roman's primary research interests
4:09 How theoretical proofs help AI safety research
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly
12:06 Impossibility results clarify what we can do
14:19 Roman's results on unexplainability and incomprehensibility
22:34 Focusing on comprehensibility
26:17 Roman's results on uncontrollability
28:33 Alignment as a subset of safety and control
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment
33:40 What does it mean to solve AI safety?
34:19 What do the impossibility results really mean?
37:07 Virtual worlds and AI alignment
49:55 AI security and malevolent agents
53:00 Air gapping, boxing, and other security methods
58:43 Some examples of historical failures of AI systems and what we can learn from them
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI
1:08:20 Are oracles a valid approach to AI safety?
1:10:30 Roman's final thoughts

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
