915: How to Jailbreak LLMs (and How to Prevent It), with Michelle Yi

19 Aug 2025 • 69 min • EN
69 min
00:00
01:09:33
No file found

Tech leader, investor, and Generationship cofounder Michelle Yi talks to Jon Krohn about finding ways to trust and secure AI systems, the methods that hackers use to jailbreak code, and what users can do to build their own trustworthy AI systems. Learn all about “red teaming” and how tech teams can handle other key technical terms like data poisoning, prompt stealing, jailbreaking and slop squatting.  This episode is brought to you by ⁠Trainium2, the latest AI chip from AWS⁠ and by the ⁠Dell AI Factory with NVIDIA⁠. Additional materials: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠www.superdatascience.com/915⁠⁠⁠⁠⁠ Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information. In this episode you will learn: (03:31) What “trustworthy AI” means      (31:15) How to build trustworthy AI systems  (46:55) About Michelle’s “sorry bench”   (48:13) How LLMs help construct causal graphs   (51:45) About Generationship 

From "Super Data Science: ML & AI Podcast with Jon Krohn"

Listen on your iPhone

Download our iOS app and listen to interviews anywhere. Enjoy all of the listener functions in one slick package. Why not give it a try?

App Store Logo
application screenshot

Popular categories