Power and Responsibility of Large Language Models | Safety & Ethics | OpenAI Model Spec + RLHF | Anthropic Constitutional AI | Episode 27

17 Jun 2024 • 16 min • EN

With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a Model Spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey powers to maximize usefulness and harmlessness.

REFERENCES

OpenAI Model Spec
https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

Anthropic Constitutional AI
https://www.anthropic.com/news/claudes-constitution

For more information, check out https://www.superprompt.fm. There you can contact me and/or sign up for our newsletter.
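The episode contrasts two alignment approaches: RLHF, where human raters' preferences shape the model, and Constitutional AI, where the model critiques and revises its own outputs against a written set of principles. The loop below is a toy illustration of that critique-and-revision idea only. Every function and the sample "constitution" are stand-ins invented for this sketch, not Anthropic's actual implementation; in a real system each step would be an LLM call.

```python
# Toy sketch of a Constitutional-AI-style critique-and-revision loop.
# All functions and principles here are illustrative stand-ins; a real
# system would prompt an LLM for the draft, critique, and revision steps.

CONSTITUTION = [
    "Avoid responses that could facilitate harm.",
    "Be helpful, honest, and harmless.",
]

def draft_response(prompt: str) -> str:
    # Stand-in for the model's initial, unfiltered completion.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stand-in: ask the model whether the response violates the principle.
    return f"Does '{response}' conflict with: {principle}?"

def revise(response: str, critique_text: str) -> str:
    # Stand-in: ask the model to rewrite the response per the critique.
    return response + " [revised]"

def constitutional_pass(prompt: str) -> str:
    # Draft once, then critique and revise against each principle in turn.
    response = draft_response(prompt)
    for principle in CONSTITUTION:
        feedback = critique(response, principle)
        response = revise(response, feedback)
    return response

print(constitutional_pass("How do I pick a lock?"))
```

The revised transcripts produced by a loop like this are what Anthropic then uses as training data, so the model learns to produce the harmless revision directly.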

From "Super Prompt: The Generative AI Podcast"

